Test Report: Docker_Linux_crio_arm64 21724

cdde98f5260d5cfb20fef0dee46a24863d2037a7:2025-10-13:41893

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.48
35 TestAddons/parallel/Registry 18.19
36 TestAddons/parallel/RegistryCreds 0.53
37 TestAddons/parallel/Ingress 144.77
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 6.36
41 TestAddons/parallel/CSI 38.48
42 TestAddons/parallel/Headlamp 3.18
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 8.43
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 5.28
98 TestFunctional/parallel/ServiceCmdConnect 603.48
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.93
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
136 TestFunctional/parallel/ServiceCmd/Format 0.59
137 TestFunctional/parallel/ServiceCmd/URL 0.5
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
191 TestJSONOutput/pause/Command 2.12
197 TestJSONOutput/unpause/Command 1.51
281 TestPause/serial/Pause 7.3
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
303 TestStartStop/group/old-k8s-version/serial/Pause 6.42
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.53
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.44
321 TestStartStop/group/no-preload/serial/Pause 7.98
327 TestStartStop/group/embed-certs/serial/Pause 8.79
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.42
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.76
341 TestStartStop/group/newest-cni/serial/Pause 5.88
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.24
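
Note: the addon-disable failures reproduced below (Volcano, Registry, RegistryCreds) all share one signature: `addons disable` exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the paused-state check runs `sudo runc list -f json` inside the node and runc cannot open /run/runc. That points at the paused check on the crio runtime rather than at the individual addons.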
TestAddons/serial/Volcano (0.48s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable volcano --alsologtostderr -v=1: exit status 11 (476.090097ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 22:15:58.908671  437275 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:15:58.910384  437275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:15:58.910402  437275 out.go:374] Setting ErrFile to fd 2...
	I1013 22:15:58.910408  437275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:15:58.910709  437275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:15:58.911017  437275 mustload.go:65] Loading cluster: addons-801288
	I1013 22:15:58.911589  437275 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:15:58.911613  437275 addons.go:606] checking whether the cluster is paused
	I1013 22:15:58.911723  437275 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:15:58.911745  437275 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:15:58.912200  437275 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:15:58.934154  437275 ssh_runner.go:195] Run: systemctl --version
	I1013 22:15:58.934215  437275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:15:58.952993  437275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:15:59.057675  437275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:15:59.057764  437275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:15:59.087131  437275 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:15:59.087152  437275 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:15:59.087156  437275 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:15:59.087161  437275 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:15:59.087164  437275 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:15:59.087167  437275 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:15:59.087170  437275 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:15:59.087173  437275 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:15:59.087176  437275 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:15:59.087183  437275 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:15:59.087186  437275 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:15:59.087190  437275 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:15:59.087193  437275 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:15:59.087197  437275 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:15:59.087200  437275 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:15:59.087208  437275 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:15:59.087214  437275 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:15:59.087219  437275 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:15:59.087222  437275 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:15:59.087225  437275 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:15:59.087229  437275 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:15:59.087232  437275 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:15:59.087235  437275 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:15:59.087238  437275 cri.go:89] found id: ""
	I1013 22:15:59.087290  437275 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:15:59.101208  437275 out.go:203] 
	W1013 22:15:59.104112  437275 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:15:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:15:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:15:59.104139  437275 out.go:285] * 
	* 
	W1013 22:15:59.298672  437275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:15:59.301571  437275 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.48s)
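
For context, the paused check that aborts the disable first lists kube-system containers through crictl (the cri.go:54 step above) and then asks runc for its container list; the runc step is what exits non-zero. A minimal Go sketch of that two-step probe, assuming a host with `crictl` and `runc` on PATH (minikube runs the real check over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1, as in the cri.go:54 line above: collect kube-system container IDs.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2, the one that fails in the log: runc keeps its state under
	// /run/runc by default, and on this crio node that directory is absent,
	// so the check errors out before any addon work happens.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("paused set: %s\n", out)
}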

TestAddons/parallel/Registry (18.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 27.92917ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004558817s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003748447s
addons_test.go:392: (dbg) Run:  kubectl --context addons-801288 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-801288 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-801288 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.513551517s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 ip
2025/10/13 22:16:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable registry --alsologtostderr -v=1: exit status 11 (314.083692ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 22:16:27.533312  438293 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:27.534188  438293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:27.534200  438293 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:27.534206  438293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:27.534483  438293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:27.534774  438293 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:27.535204  438293 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:27.535221  438293 addons.go:606] checking whether the cluster is paused
	I1013 22:16:27.535328  438293 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:27.535343  438293 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:27.535780  438293 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:27.558140  438293 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:27.558203  438293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:27.575911  438293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:27.689996  438293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:27.690097  438293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:27.750243  438293 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:27.750269  438293 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:27.750274  438293 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:27.750292  438293 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:27.750296  438293 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:27.750300  438293 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:27.750304  438293 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:27.750307  438293 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:27.750310  438293 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:27.750316  438293 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:27.750320  438293 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:27.750324  438293 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:27.750327  438293 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:27.750331  438293 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:27.750334  438293 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:27.750338  438293 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:27.750341  438293 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:27.750345  438293 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:27.750348  438293 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:27.750351  438293 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:27.750356  438293 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:27.750359  438293 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:27.750362  438293 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:27.750365  438293 cri.go:89] found id: ""
	I1013 22:16:27.750422  438293 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:27.768366  438293 out.go:203] 
	W1013 22:16:27.771511  438293 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:27.771539  438293 out.go:285] * 
	* 
	W1013 22:16:27.780178  438293 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:27.783449  438293 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (18.19s)

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.154846ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-801288
addons_test.go:332: (dbg) Run:  kubectl --context addons-801288 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (255.858115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 22:16:51.584602  439282 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:51.585316  439282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:51.585330  439282 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:51.585335  439282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:51.585600  439282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:51.585900  439282 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:51.586283  439282 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:51.586301  439282 addons.go:606] checking whether the cluster is paused
	I1013 22:16:51.586404  439282 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:51.586425  439282 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:51.586912  439282 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:51.604739  439282 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:51.604804  439282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:51.628988  439282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:51.730271  439282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:51.730354  439282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:51.761183  439282 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:51.761221  439282 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:51.761226  439282 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:51.761231  439282 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:51.761234  439282 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:51.761254  439282 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:51.761264  439282 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:51.761268  439282 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:51.761271  439282 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:51.761282  439282 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:51.761290  439282 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:51.761294  439282 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:51.761298  439282 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:51.761306  439282 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:51.761310  439282 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:51.761386  439282 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:51.761398  439282 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:51.761403  439282 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:51.761407  439282 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:51.761410  439282 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:51.761428  439282 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:51.761436  439282 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:51.761439  439282 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:51.761442  439282 cri.go:89] found id: ""
	I1013 22:16:51.761505  439282 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:51.776281  439282 out.go:203] 
	W1013 22:16:51.779128  439282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:51.779149  439282 out.go:285] * 
	* 
	W1013 22:16:51.785738  439282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:51.788655  439282 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (144.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-801288 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-801288 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-801288 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [310dbff3-0194-4026-bee1-4d0f356604b0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [310dbff3-0194-4026-bee1-4d0f356604b0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003431981s
I1013 22:16:49.077104  430652 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.519667811s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
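
For reference, the remote command's exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT): nothing answered on 127.0.0.1:80 within the limit. A minimal Go sketch of the same probe, assuming it runs where the ingress controller should be listening (the Host header is what selects the nginx.example.com Ingress rule):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Route the request to the Ingress rule for nginx.example.com; the bare
	// IP alone would not match any backend.
	req.Host = "nginx.example.com"
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed:", err) // a hang like the one above lands here
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}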
addons_test.go:288: (dbg) Run:  kubectl --context addons-801288 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-801288
helpers_test.go:243: (dbg) docker inspect addons-801288:

-- stdout --
	[
	    {
	        "Id": "bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077",
	        "Created": "2025-10-13T22:13:31.694503561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431817,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:13:31.755415379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/hosts",
	        "LogPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077-json.log",
	        "Name": "/addons-801288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-801288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-801288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077",
	                "LowerDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-801288",
	                "Source": "/var/lib/docker/volumes/addons-801288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-801288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-801288",
	                "name.minikube.sigs.k8s.io": "addons-801288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcafabd3bb1b99d7e6223456dd38f856f9c25104d77ab365da1a11d226938ae0",
	            "SandboxKey": "/var/run/docker/netns/fcafabd3bb1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-801288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ec:36:65:c8:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c74e1daa794b08aa86481b82dea805b06eed83f0512c353bf34e0ad53c7b7e7a",
	                    "EndpointID": "1ce71dc73c77af431e1e902f4e14f841628e6cbfc89c7186d230479ed13f0a4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-801288",
	                        "bcc7adeb9dda"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
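
The NetworkSettings.Ports block above (22/tcp → 127.0.0.1:33163) is the mapping the earlier cli_runner steps extract to open the node's SSH session (the Port:33163 in sshutil.go:53). A minimal Go sketch of that lookup, assuming the Docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the cli_runner step uses: the outer index picks the
	// "22/tcp" entry of NetworkSettings.Ports, the inner picks binding 0.
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "addons-801288").Output()
	if err != nil {
		fmt.Println("inspect:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33163 above
}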
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-801288 -n addons-801288
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-801288 logs -n 25: (1.485501994s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-659560                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-659560 │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ start   │ --download-only -p binary-mirror-193732 --alsologtostderr --binary-mirror http://127.0.0.1:45831 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-193732   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ delete  │ -p binary-mirror-193732                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-193732   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ addons  │ disable dashboard -p addons-801288                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-801288                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ start   │ -p addons-801288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:15 UTC │
	│ addons  │ addons-801288 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:15 UTC │                     │
	│ addons  │ addons-801288 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-801288 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ ip      │ addons-801288 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │ 13 Oct 25 22:16 UTC │
	│ addons  │ addons-801288 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ ssh     │ addons-801288 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-801288                                                                                                                                                                                                                                                                                                                                                                                           │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │ 13 Oct 25 22:16 UTC │
	│ addons  │ addons-801288 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ addons-801288 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:17 UTC │                     │
	│ ssh     │ addons-801288 ssh cat /opt/local-path-provisioner/pvc-85b9cd0c-3387-41b6-94c8-0436514e03ca_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:17 UTC │ 13 Oct 25 22:17 UTC │
	│ addons  │ addons-801288 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:17 UTC │                     │
	│ addons  │ addons-801288 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:17 UTC │                     │
	│ ip      │ addons-801288 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:19 UTC │ 13 Oct 25 22:19 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:13:06
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:13:06.088900  431413 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:13:06.089071  431413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:13:06.089083  431413 out.go:374] Setting ErrFile to fd 2...
	I1013 22:13:06.089088  431413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:13:06.089372  431413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:13:06.089881  431413 out.go:368] Setting JSON to false
	I1013 22:13:06.090746  431413 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6922,"bootTime":1760386664,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:13:06.090827  431413 start.go:141] virtualization:  
	I1013 22:13:06.094351  431413 out.go:179] * [addons-801288] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:13:06.097384  431413 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:13:06.097492  431413 notify.go:220] Checking for updates...
	I1013 22:13:06.103402  431413 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:13:06.106347  431413 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:13:06.109243  431413 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:13:06.112304  431413 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:13:06.115194  431413 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:13:06.118366  431413 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:13:06.148690  431413 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:13:06.148808  431413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:13:06.213455  431413 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:13:06.203780917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:13:06.213560  431413 docker.go:318] overlay module found
	I1013 22:13:06.216689  431413 out.go:179] * Using the docker driver based on user configuration
	I1013 22:13:06.219631  431413 start.go:305] selected driver: docker
	I1013 22:13:06.219661  431413 start.go:925] validating driver "docker" against <nil>
	I1013 22:13:06.219675  431413 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:13:06.220474  431413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:13:06.280769  431413 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:13:06.271515563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:13:06.280931  431413 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:13:06.281159  431413 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:13:06.284106  431413 out.go:179] * Using Docker driver with root privileges
	I1013 22:13:06.287033  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:06.287172  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:06.287195  431413 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:13:06.287277  431413 start.go:349] cluster config:
	{Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:13:06.292174  431413 out.go:179] * Starting "addons-801288" primary control-plane node in "addons-801288" cluster
	I1013 22:13:06.295063  431413 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:13:06.298225  431413 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:13:06.301061  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:06.301129  431413 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:13:06.301143  431413 cache.go:58] Caching tarball of preloaded images
	I1013 22:13:06.301145  431413 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:13:06.301307  431413 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:13:06.301321  431413 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:13:06.301655  431413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json ...
	I1013 22:13:06.301676  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json: {Name:mk189791b193351cde1c6fb4f810c4fe55afe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:06.317110  431413 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 22:13:06.317250  431413 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 22:13:06.317275  431413 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1013 22:13:06.317280  431413 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1013 22:13:06.317292  431413 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1013 22:13:06.317298  431413 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1013 22:13:24.424029  431413 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1013 22:13:24.424076  431413 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:13:24.424106  431413 start.go:360] acquireMachinesLock for addons-801288: {Name:mk70e26ec42122cf271e40434c2fec37d8cdfa21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:13:24.424240  431413 start.go:364] duration metric: took 111.407µs to acquireMachinesLock for "addons-801288"
	I1013 22:13:24.424271  431413 start.go:93] Provisioning new machine with config: &{Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:13:24.424348  431413 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:13:24.427723  431413 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 22:13:24.427978  431413 start.go:159] libmachine.API.Create for "addons-801288" (driver="docker")
	I1013 22:13:24.428024  431413 client.go:168] LocalClient.Create starting
	I1013 22:13:24.428152  431413 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 22:13:25.045390  431413 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 22:13:26.072555  431413 cli_runner.go:164] Run: docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:13:26.089602  431413 cli_runner.go:211] docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:13:26.089691  431413 network_create.go:284] running [docker network inspect addons-801288] to gather additional debugging logs...
	I1013 22:13:26.089715  431413 cli_runner.go:164] Run: docker network inspect addons-801288
	W1013 22:13:26.106002  431413 cli_runner.go:211] docker network inspect addons-801288 returned with exit code 1
	I1013 22:13:26.106042  431413 network_create.go:287] error running [docker network inspect addons-801288]: docker network inspect addons-801288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-801288 not found
	I1013 22:13:26.106059  431413 network_create.go:289] output of [docker network inspect addons-801288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-801288 not found
	
	** /stderr **
	I1013 22:13:26.106160  431413 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:13:26.122981  431413 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae8630}
	I1013 22:13:26.123026  431413 network_create.go:124] attempt to create docker network addons-801288 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 22:13:26.123107  431413 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-801288 addons-801288
	I1013 22:13:26.182969  431413 network_create.go:108] docker network addons-801288 192.168.49.0/24 created
	I1013 22:13:26.183002  431413 kic.go:121] calculated static IP "192.168.49.2" for the "addons-801288" container
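	The network step above is plain Docker; a minimal sketch for reproducing or verifying it by hand, assuming a local Docker daemon and that the name addons-801288 is unused (the create command is copied from the log):
	  # recreate the bridge network the way the log shows minikube doing it
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-801288 addons-801288
	  # confirm the subnet/gateway that the static IP 192.168.49.2 is derived from
	  docker network inspect addons-801288 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'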
	I1013 22:13:26.183075  431413 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:13:26.200082  431413 cli_runner.go:164] Run: docker volume create addons-801288 --label name.minikube.sigs.k8s.io=addons-801288 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:13:26.217431  431413 oci.go:103] Successfully created a docker volume addons-801288
	I1013 22:13:26.217517  431413 cli_runner.go:164] Run: docker run --rm --name addons-801288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-801288 --entrypoint /usr/bin/test -v addons-801288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:13:27.172435  431413 oci.go:107] Successfully prepared a docker volume addons-801288
	I1013 22:13:27.172527  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:27.172562  431413 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:13:27.172687  431413 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-801288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:13:31.603904  431413 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-801288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431152129s)
	I1013 22:13:31.603937  431413 kic.go:203] duration metric: took 4.431373908s to extract preloaded images to volume ...
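	The preload step is just tar running in a throwaway kicbase container with the machine volume mounted. A hand-run equivalent of what the log records (the @sha256 digest suffix is dropped from the image reference here for readability; everything else is taken from the log):
	  PRELOAD=/home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD":/preloaded.tar:ro -v addons-801288:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724 \
	    -I lz4 -xf /preloaded.tar -C /extractDir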
	W1013 22:13:31.604083  431413 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:13:31.604197  431413 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:13:31.677573  431413 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-801288 --name addons-801288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-801288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-801288 --network addons-801288 --ip 192.168.49.2 --volume addons-801288:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:13:31.986588  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Running}}
	I1013 22:13:32.008313  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.033375  431413 cli_runner.go:164] Run: docker exec addons-801288 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:13:32.083508  431413 oci.go:144] the created container "addons-801288" has a running status.
	I1013 22:13:32.083536  431413 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa...
	I1013 22:13:32.586320  431413 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:13:32.613507  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.631000  431413 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:13:32.631021  431413 kic_runner.go:114] Args: [docker exec --privileged addons-801288 chown docker:docker /home/docker/.ssh/authorized_keys]
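	Once the key is installed, the node is reachable over the published SSH port (33163 in this run; the port is allocated dynamically and changes per run):
	  ssh -i /home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa \
	    -p 33163 docker@127.0.0.1 hostname
	  # or let minikube resolve the port itself
	  out/minikube-linux-arm64 -p addons-801288 ssh -- hostname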
	I1013 22:13:32.677488  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.695494  431413 machine.go:93] provisionDockerMachine start ...
	I1013 22:13:32.695606  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:32.711943  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:32.712268  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:32.712288  431413 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:13:32.712888  431413 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34900->127.0.0.1:33163: read: connection reset by peer
	I1013 22:13:35.862977  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-801288
	
	I1013 22:13:35.863000  431413 ubuntu.go:182] provisioning hostname "addons-801288"
	I1013 22:13:35.863112  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:35.880632  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:35.880946  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:35.880962  431413 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-801288 && echo "addons-801288" | sudo tee /etc/hostname
	I1013 22:13:36.037547  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-801288
	
	I1013 22:13:36.037626  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.057039  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:36.057361  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:36.057377  431413 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-801288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-801288/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-801288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:13:36.207250  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:13:36.207275  431413 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 22:13:36.207308  431413 ubuntu.go:190] setting up certificates
	I1013 22:13:36.207318  431413 provision.go:84] configureAuth start
	I1013 22:13:36.207384  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:36.223837  431413 provision.go:143] copyHostCerts
	I1013 22:13:36.223952  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 22:13:36.224099  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 22:13:36.224182  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 22:13:36.224285  431413 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.addons-801288 san=[127.0.0.1 192.168.49.2 addons-801288 localhost minikube]
	I1013 22:13:36.476699  431413 provision.go:177] copyRemoteCerts
	I1013 22:13:36.476766  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:13:36.476812  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.494834  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:36.599015  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:13:36.616522  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:13:36.633758  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:13:36.650845  431413 provision.go:87] duration metric: took 443.512331ms to configureAuth
	I1013 22:13:36.650872  431413 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:13:36.651054  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:13:36.651263  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.668276  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:36.668619  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:36.668641  431413 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:13:36.919892  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:13:36.919919  431413 machine.go:96] duration metric: took 4.224400784s to provisionDockerMachine
	I1013 22:13:36.919929  431413 client.go:171] duration metric: took 12.491893959s to LocalClient.Create
	I1013 22:13:36.919944  431413 start.go:167] duration metric: took 12.491968147s to libmachine.API.Create "addons-801288"
	I1013 22:13:36.919950  431413 start.go:293] postStartSetup for "addons-801288" (driver="docker")
	I1013 22:13:36.919960  431413 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:13:36.920032  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:13:36.920088  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.938055  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.040439  431413 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:13:37.043932  431413 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:13:37.043964  431413 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:13:37.043975  431413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 22:13:37.044042  431413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 22:13:37.044070  431413 start.go:296] duration metric: took 124.114244ms for postStartSetup
	I1013 22:13:37.044381  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:37.061209  431413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json ...
	I1013 22:13:37.061507  431413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:13:37.061551  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.079403  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.180068  431413 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:13:37.184831  431413 start.go:128] duration metric: took 12.760467193s to createHost
	I1013 22:13:37.184853  431413 start.go:83] releasing machines lock for "addons-801288", held for 12.760598562s
	I1013 22:13:37.184933  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:37.201983  431413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:13:37.202057  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.202318  431413 ssh_runner.go:195] Run: cat /version.json
	I1013 22:13:37.202362  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.222246  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.224908  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.409944  431413 ssh_runner.go:195] Run: systemctl --version
	I1013 22:13:37.416175  431413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:13:37.451407  431413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:13:37.455593  431413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:13:37.455674  431413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:13:37.483347  431413 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:13:37.483375  431413 start.go:495] detecting cgroup driver to use...
	I1013 22:13:37.483406  431413 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:13:37.483466  431413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:13:37.500792  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:13:37.514097  431413 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:13:37.514162  431413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:13:37.532346  431413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:13:37.551438  431413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:13:37.670053  431413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:13:37.795096  431413 docker.go:234] disabling docker service ...
	I1013 22:13:37.795182  431413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:13:37.815097  431413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:13:37.828267  431413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:13:37.941936  431413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:13:38.059823  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:13:38.076855  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:13:38.093602  431413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:13:38.093679  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.104040  431413 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:13:38.104123  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.113398  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.122749  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.132322  431413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:13:38.140934  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.150271  431413 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.164266  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.173838  431413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:13:38.181956  431413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:13:38.190631  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:13:38.300465  431413 ssh_runner.go:195] Run: sudo systemctl restart crio
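	The CRI-O tweaks above are all in-place sed rewrites of /etc/crio/crio.conf.d/02-crio.conf. Gathered into one script, the core of what the log shows amounts to:
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  sudo systemctl daemon-reload && sudo systemctl restart crio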
	I1013 22:13:38.435294  431413 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:13:38.435421  431413 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:13:38.439184  431413 start.go:563] Will wait 60s for crictl version
	I1013 22:13:38.439303  431413 ssh_runner.go:195] Run: which crictl
	I1013 22:13:38.442720  431413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:13:38.468623  431413 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:13:38.468772  431413 ssh_runner.go:195] Run: crio --version
	I1013 22:13:38.497615  431413 ssh_runner.go:195] Run: crio --version
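	With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines above), the runtime can also be queried directly on the node, e.g.:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl images --output json   # the same call the preload check below relies on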
	I1013 22:13:38.529615  431413 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:13:38.532444  431413 cli_runner.go:164] Run: docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:13:38.550631  431413 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 22:13:38.554540  431413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:13:38.564330  431413 kubeadm.go:883] updating cluster {Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:13:38.564459  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:38.564520  431413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:13:38.597733  431413 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:13:38.597760  431413 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:13:38.597820  431413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:13:38.627389  431413 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:13:38.627414  431413 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:13:38.627422  431413 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 22:13:38.627516  431413 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-801288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
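	The kubelet drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; once in place, the merged unit (including the ExecStart override) can be confirmed on the node with:
	  out/minikube-linux-arm64 -p addons-801288 ssh -- sudo systemctl cat kubelet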
	I1013 22:13:38.627619  431413 ssh_runner.go:195] Run: crio config
	I1013 22:13:38.680264  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:38.680287  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:38.680308  431413 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:13:38.680333  431413 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-801288 NodeName:addons-801288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:13:38.680478  431413 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-801288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
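	The generated config above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As an aside, newer kubeadm releases ship a validate subcommand that can sanity-check such a file before init; assuming this kubeadm version supports it:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new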
	
	I1013 22:13:38.680552  431413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:13:38.688464  431413 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:13:38.688579  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:13:38.696340  431413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 22:13:38.709579  431413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:13:38.722369  431413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1013 22:13:38.735041  431413 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:13:38.738653  431413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:13:38.748252  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:13:38.869690  431413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:13:38.886202  431413 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288 for IP: 192.168.49.2
	I1013 22:13:38.886268  431413 certs.go:195] generating shared ca certs ...
	I1013 22:13:38.886302  431413 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:38.886464  431413 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 22:13:39.330197  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt ...
	I1013 22:13:39.330227  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt: {Name:mk5023dbb88ff3c4b9af32c9937eb6ec5e270041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.330462  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key ...
	I1013 22:13:39.330478  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key: {Name:mkffd4b77e79837420b00658adbd480528e197d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.331313  431413 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 22:13:39.872952  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt ...
	I1013 22:13:39.872985  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt: {Name:mk6273f1afcfd01cccd9524e5147c2e91200566f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.873734  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key ...
	I1013 22:13:39.873751  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key: {Name:mkc7707ae989847021566a30f7a9177a0d38623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.873839  431413 certs.go:257] generating profile certs ...
	I1013 22:13:39.873895  431413 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key
	I1013 22:13:39.873913  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt with IP's: []
	I1013 22:13:40.398479  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt ...
	I1013 22:13:40.398511  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: {Name:mk272d69de7ae58c64aa9603271795d35c92756a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.398708  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key ...
	I1013 22:13:40.398735  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key: {Name:mk98b60ef0d9536c388401822babaea3b25dad40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.398820  431413 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c
	I1013 22:13:40.398846  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 22:13:40.495000  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c ...
	I1013 22:13:40.495030  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c: {Name:mkdea7fedef3ede0007aaabbf7f10d7be649e6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.495220  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c ...
	I1013 22:13:40.495245  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c: {Name:mk1a8142b5e74627bc756f1d4c3b23f803629997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.495330  431413 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt
	I1013 22:13:40.495413  431413 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key
	I1013 22:13:40.495470  431413 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key
	I1013 22:13:40.495491  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt with IP's: []
	I1013 22:13:40.922203  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt ...
	I1013 22:13:40.922234  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt: {Name:mkc135bb90d68af9be1c55c33d73e6d39c3043ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.922421  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key ...
	I1013 22:13:40.922436  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key: {Name:mk63d144de190b74c106e99fe8c2cd486bb8d634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.922629  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:13:40.922669  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:13:40.922699  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:13:40.922731  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 22:13:40.923311  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:13:40.941572  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:13:40.959352  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:13:40.976692  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:13:40.994938  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:13:41.013794  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:13:41.031960  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:13:41.049815  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:13:41.067602  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:13:41.085275  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:13:41.098599  431413 ssh_runner.go:195] Run: openssl version
	I1013 22:13:41.105121  431413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:13:41.113241  431413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.116837  431413 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.116933  431413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.157708  431413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
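The two commands above wire the minikube CA into the node's OpenSSL trust store: openssl x509 -hash -noout prints the certificate's subject-name hash (b5213941 here), and OpenSSL locates CAs in /etc/ssl/certs through a <hash>.0 symlink during chain verification. A sketch of the same convention, assuming a PEM CA at ./ca.pem:

    # Compute the OpenSSL subject hash for the CA certificate.
    hash=$(openssl x509 -hash -noout -in ./ca.pem)

    # Install the cert and the <hash>.0 symlink OpenSSL resolves at verify time.
    sudo cp ./ca.pem /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"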
	I1013 22:13:41.166313  431413 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:13:41.170173  431413 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:13:41.170241  431413 kubeadm.go:400] StartCluster: {Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:13:41.170335  431413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:13:41.170394  431413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:13:41.200810  431413 cri.go:89] found id: ""
	I1013 22:13:41.200934  431413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:13:41.208834  431413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:13:41.216607  431413 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:13:41.216677  431413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:13:41.224276  431413 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:13:41.224297  431413 kubeadm.go:157] found existing configuration files:
	
	I1013 22:13:41.224346  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:13:41.233709  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:13:41.233778  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:13:41.242452  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:13:41.253901  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:13:41.253968  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:13:41.262478  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:13:41.272862  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:13:41.272932  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:13:41.281392  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:13:41.289172  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:13:41.289241  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
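Each kubeconfig under /etc/kubernetes is handled identically above: grep for the expected control-plane URL, and if the file is absent or points elsewhere, remove it so kubeadm regenerates it on init. The four grep/rm pairs reduce to one loop; a sketch using the same endpoint:

    endpoint="https://control-plane.minikube.internal:8443"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # grep exits non-zero when the file is missing or lacks the endpoint;
      # either way the stale config is removed and kubeadm init rewrites it.
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done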
	I1013 22:13:41.296817  431413 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:13:41.340507  431413 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:13:41.340569  431413 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:13:41.364320  431413 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:13:41.364400  431413 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:13:41.364440  431413 kubeadm.go:318] OS: Linux
	I1013 22:13:41.364493  431413 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:13:41.364554  431413 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:13:41.364607  431413 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:13:41.364663  431413 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:13:41.364719  431413 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:13:41.364774  431413 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:13:41.364826  431413 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:13:41.364880  431413 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:13:41.364933  431413 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:13:41.432455  431413 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:13:41.432577  431413 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:13:41.432678  431413 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:13:41.440560  431413 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:13:41.444095  431413 out.go:252]   - Generating certificates and keys ...
	I1013 22:13:41.444199  431413 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:13:41.444272  431413 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:13:41.864406  431413 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:13:41.963483  431413 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:13:42.734285  431413 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:13:43.515894  431413 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:13:43.688648  431413 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:13:43.688824  431413 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-801288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 22:13:44.248201  431413 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:13:44.248823  431413 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-801288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 22:13:44.400408  431413 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:13:45.120473  431413 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:13:45.380956  431413 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:13:45.381288  431413 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:13:45.806143  431413 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:13:46.717273  431413 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:13:46.848950  431413 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:13:47.527491  431413 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:13:47.743049  431413 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:13:47.743712  431413 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:13:47.746469  431413 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:13:47.749892  431413 out.go:252]   - Booting up control plane ...
	I1013 22:13:47.750008  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:13:47.750091  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:13:47.750171  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:13:47.765510  431413 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:13:47.765825  431413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:13:47.773892  431413 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:13:47.774259  431413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:13:47.774325  431413 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:13:47.904511  431413 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:13:47.904637  431413 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:13:49.405182  431413 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500953433s
	I1013 22:13:49.408782  431413 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:13:49.408885  431413 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 22:13:49.409172  431413 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:13:49.409267  431413 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:13:50.997667  431413 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.588370048s
	I1013 22:13:53.681700  431413 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.272889071s
	I1013 22:13:55.410830  431413 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001952217s
	I1013 22:13:55.430688  431413 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:13:55.445522  431413 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:13:55.457711  431413 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:13:55.457928  431413 kubeadm.go:318] [mark-control-plane] Marking the node addons-801288 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:13:55.470684  431413 kubeadm.go:318] [bootstrap-token] Using token: iuujyr.pmmj8z57kgb438qe
	I1013 22:13:55.473894  431413 out.go:252]   - Configuring RBAC rules ...
	I1013 22:13:55.474029  431413 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:13:55.480866  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:13:55.488903  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:13:55.492932  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:13:55.503248  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:13:55.507373  431413 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:13:55.820323  431413 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:13:56.255780  431413 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:13:56.817588  431413 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:13:56.818918  431413 kubeadm.go:318] 
	I1013 22:13:56.818999  431413 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:13:56.819009  431413 kubeadm.go:318] 
	I1013 22:13:56.819108  431413 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:13:56.819119  431413 kubeadm.go:318] 
	I1013 22:13:56.819146  431413 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:13:56.819212  431413 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:13:56.819268  431413 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:13:56.819276  431413 kubeadm.go:318] 
	I1013 22:13:56.819333  431413 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:13:56.819342  431413 kubeadm.go:318] 
	I1013 22:13:56.819393  431413 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:13:56.819402  431413 kubeadm.go:318] 
	I1013 22:13:56.819457  431413 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:13:56.819539  431413 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:13:56.819621  431413 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:13:56.819630  431413 kubeadm.go:318] 
	I1013 22:13:56.819719  431413 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:13:56.819804  431413 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:13:56.819811  431413 kubeadm.go:318] 
	I1013 22:13:56.819906  431413 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token iuujyr.pmmj8z57kgb438qe \
	I1013 22:13:56.820015  431413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 22:13:56.820036  431413 kubeadm.go:318] 	--control-plane 
	I1013 22:13:56.820041  431413 kubeadm.go:318] 
	I1013 22:13:56.820129  431413 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:13:56.820134  431413 kubeadm.go:318] 
	I1013 22:13:56.820219  431413 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token iuujyr.pmmj8z57kgb438qe \
	I1013 22:13:56.820326  431413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 22:13:56.824275  431413 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:13:56.824517  431413 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:13:56.824629  431413 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
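The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes: it is the SHA-256 digest of the CA's DER-encoded public key. With kubeadm's default RSA CA key it can be recomputed on the control plane (certificateDir is /var/lib/minikube/certs in this run) and should reproduce the sha256:532ea8... value printed above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'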
	I1013 22:13:56.824649  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:56.824657  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:56.827761  431413 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:13:56.830554  431413 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:13:56.834586  431413 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:13:56.834607  431413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:13:56.848063  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
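Note the "scp memory -->" step above: the CNI manifest is rendered in-process and streamed to the node over SSH, then applied with the node-pinned kubectl against the node-local kubeconfig, so nothing depends on the host's kubectl or kubeconfig. Roughly the same steps done by hand, assuming a rendered manifest at ./cni.yaml (minikube cp and minikube ssh are stock subcommands; the profile name is this run's):

    minikube -p addons-801288 cp ./cni.yaml /var/tmp/minikube/cni.yaml
    minikube -p addons-801288 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml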
	I1013 22:13:57.129628  431413 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:13:57.129761  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:57.129846  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-801288 minikube.k8s.io/updated_at=2025_10_13T22_13_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=addons-801288 minikube.k8s.io/primary=true
	I1013 22:13:57.272756  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:57.272637  431413 ops.go:34] apiserver oom_adj: -16
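The -16 read back from /proc/<pid>/oom_adj is the legacy kernel OOM knob: kubelet gives critical static pods like kube-apiserver an oom_score_adj of -997, which the kernel mirrors onto the deprecated -17..15 oom_adj scale as -16, making the apiserver one of the last processes the OOM killer will pick. A quick way to inspect both views on the node:

    pid=$(pgrep kube-apiserver)
    cat "/proc/${pid}/oom_score_adj"   # current scale, -1000..1000; expect -997
    cat "/proc/${pid}/oom_adj"         # legacy scale, -17..15; -997 maps to -16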
	I1013 22:13:57.773379  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:58.272794  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:58.773542  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:59.273404  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:59.772859  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.273656  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.773504  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.862097  431413 kubeadm.go:1113] duration metric: took 3.732382676s to wait for elevateKubeSystemPrivileges
	I1013 22:14:00.862128  431413 kubeadm.go:402] duration metric: took 19.691910251s to StartCluster
	I1013 22:14:00.862144  431413 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:14:00.862258  431413 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:14:00.862687  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:14:00.862878  431413 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:14:00.863008  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:14:00.863292  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:14:00.863400  431413 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 22:14:00.863478  431413 addons.go:69] Setting yakd=true in profile "addons-801288"
	I1013 22:14:00.863490  431413 addons.go:238] Setting addon yakd=true in "addons-801288"
	I1013 22:14:00.863512  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.864132  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.864568  431413 addons.go:69] Setting inspektor-gadget=true in profile "addons-801288"
	I1013 22:14:00.864584  431413 addons.go:238] Setting addon inspektor-gadget=true in "addons-801288"
	I1013 22:14:00.864607  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.865014  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.867467  431413 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-801288"
	I1013 22:14:00.867565  431413 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-801288"
	I1013 22:14:00.867597  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.868042  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.868643  431413 addons.go:69] Setting metrics-server=true in profile "addons-801288"
	I1013 22:14:00.869066  431413 addons.go:238] Setting addon metrics-server=true in "addons-801288"
	I1013 22:14:00.869116  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.873080  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.868782  431413 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-801288"
	I1013 22:14:00.875877  431413 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-801288"
	I1013 22:14:00.868793  431413 addons.go:69] Setting registry=true in profile "addons-801288"
	I1013 22:14:00.868800  431413 addons.go:69] Setting registry-creds=true in profile "addons-801288"
	I1013 22:14:00.868806  431413 addons.go:69] Setting storage-provisioner=true in profile "addons-801288"
	I1013 22:14:00.868812  431413 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-801288"
	I1013 22:14:00.868817  431413 addons.go:69] Setting volcano=true in profile "addons-801288"
	I1013 22:14:00.868822  431413 addons.go:69] Setting volumesnapshots=true in profile "addons-801288"
	I1013 22:14:00.868830  431413 out.go:179] * Verifying Kubernetes components...
	I1013 22:14:00.868989  431413 addons.go:69] Setting gcp-auth=true in profile "addons-801288"
	I1013 22:14:00.868997  431413 addons.go:69] Setting cloud-spanner=true in profile "addons-801288"
	I1013 22:14:00.869005  431413 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-801288"
	I1013 22:14:00.869011  431413 addons.go:69] Setting default-storageclass=true in profile "addons-801288"
	I1013 22:14:00.869018  431413 addons.go:69] Setting ingress-dns=true in profile "addons-801288"
	I1013 22:14:00.869032  431413 addons.go:69] Setting ingress=true in profile "addons-801288"
	I1013 22:14:00.879188  431413 addons.go:238] Setting addon ingress=true in "addons-801288"
	I1013 22:14:00.879271  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.879838  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.886116  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.886678  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.893929  431413 addons.go:238] Setting addon cloud-spanner=true in "addons-801288"
	I1013 22:14:00.894039  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.894597  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.898863  431413 addons.go:238] Setting addon registry=true in "addons-801288"
	I1013 22:14:00.898967  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.899581  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.903175  431413 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-801288"
	I1013 22:14:00.903227  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.903667  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.920280  431413 addons.go:238] Setting addon registry-creds=true in "addons-801288"
	I1013 22:14:00.920377  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.920870  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.931175  431413 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-801288"
	I1013 22:14:00.931553  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.948928  431413 addons.go:238] Setting addon ingress-dns=true in "addons-801288"
	I1013 22:14:00.948995  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.949482  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.954944  431413 addons.go:238] Setting addon storage-provisioner=true in "addons-801288"
	I1013 22:14:00.955001  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.955485  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.975992  431413 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-801288"
	I1013 22:14:00.976333  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.999871  431413 addons.go:238] Setting addon volcano=true in "addons-801288"
	I1013 22:14:01.000001  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.000511  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.014818  431413 addons.go:238] Setting addon volumesnapshots=true in "addons-801288"
	I1013 22:14:01.014877  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.015473  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.031953  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:14:01.032374  431413 mustload.go:65] Loading cluster: addons-801288
	I1013 22:14:01.032587  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:14:01.032825  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.063431  431413 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 22:14:01.068958  431413 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 22:14:01.068985  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 22:14:01.069084  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.088392  431413 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 22:14:01.101168  431413 addons.go:238] Setting addon default-storageclass=true in "addons-801288"
	I1013 22:14:01.101212  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.101649  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.101974  431413 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 22:14:01.107357  431413 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 22:14:01.107678  431413 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-801288"
	I1013 22:14:01.107729  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.112041  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
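The sed pipeline above patches the live coredns ConfigMap in place: it inserts a hosts plugin block ahead of the forward directive, mapping host.minikube.internal to the gateway 192.168.49.1, and adds a log directive before errors. Reading the Corefile back (paths as on the node) should show the injected block, reconstructed here from the sed expressions:

    # Expected fragment ahead of "forward . /etc/resolv.conf":
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'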
	I1013 22:14:01.112497  431413 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 22:14:01.112513  431413 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 22:14:01.113330  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 22:14:01.113392  431413 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 22:14:01.113500  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.117050  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	W1013 22:14:01.124680  431413 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 22:14:01.124953  431413 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 22:14:01.125805  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.130106  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 22:14:01.130128  431413 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 22:14:01.130203  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.155322  431413 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 22:14:01.158323  431413 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 22:14:01.158350  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 22:14:01.158426  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.158763  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1013 22:14:01.161740  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:01.164679  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:01.167572  431413 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 22:14:01.167597  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 22:14:01.167665  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.181253  431413 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 22:14:01.184233  431413 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 22:14:01.187049  431413 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 22:14:01.187160  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 22:14:01.187244  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.206761  431413 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 22:14:01.206793  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 22:14:01.207396  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.214181  431413 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 22:14:01.219417  431413 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 22:14:01.219442  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 22:14:01.219512  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.248850  431413 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 22:14:01.251677  431413 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 22:14:01.251709  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 22:14:01.251776  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.259963  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.270241  431413 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:14:01.270261  431413 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:14:01.270318  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.263520  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.278079  431413 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:14:01.282277  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 22:14:01.287419  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.296552  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 22:14:01.298601  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 22:14:01.298660  431413 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 22:14:01.298772  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.343293  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 22:14:01.343623  431413 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:14:01.343641  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:14:01.343706  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.364299  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 22:14:01.367502  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 22:14:01.370174  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.376281  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.380051  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 22:14:01.384614  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 22:14:01.393626  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 22:14:01.398077  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 22:14:01.403385  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.404673  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.404763  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 22:14:01.406574  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 22:14:01.406691  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.421214  431413 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 22:14:01.424884  431413 out.go:179]   - Using image docker.io/busybox:stable
	I1013 22:14:01.427632  431413 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 22:14:01.427667  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 22:14:01.427763  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.473793  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.483149  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.489636  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.507377  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.530305  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.545079  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.547760  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.550463  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	W1013 22:14:01.550974  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.551011  431413 retry.go:31] will retry after 178.792854ms: ssh: handshake failed: EOF
	W1013 22:14:01.559187  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.559275  431413 retry.go:31] will retry after 249.651777ms: ssh: handshake failed: EOF
	I1013 22:14:01.559454  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.561188  431413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1013 22:14:01.809685  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.809720  431413 retry.go:31] will retry after 309.993973ms: ssh: handshake failed: EOF
	I1013 22:14:01.978807  431413 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:01.978883  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 22:14:02.052418  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:02.055866  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 22:14:02.065020  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 22:14:02.065046  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 22:14:02.143888  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 22:14:02.143915  431413 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 22:14:02.180856  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 22:14:02.180880  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 22:14:02.265884  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 22:14:02.274649  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 22:14:02.274676  431413 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 22:14:02.299510  431413 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 22:14:02.299534  431413 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 22:14:02.322054  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 22:14:02.322080  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 22:14:02.343417  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 22:14:02.372768  431413 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 22:14:02.372792  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 22:14:02.378202  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 22:14:02.417106  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 22:14:02.417126  431413 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 22:14:02.461530  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 22:14:02.462433  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:14:02.541316  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 22:14:02.541340  431413 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 22:14:02.544408  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 22:14:02.544434  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 22:14:02.549323  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 22:14:02.552901  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:14:02.569979  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 22:14:02.572748  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 22:14:02.640201  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 22:14:02.640278  431413 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 22:14:02.675890  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 22:14:02.675974  431413 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 22:14:02.784660  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 22:14:02.861741  431413 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:02.861766  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 22:14:02.886378  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 22:14:02.886401  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 22:14:03.108705  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 22:14:03.129044  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 22:14:03.129074  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 22:14:03.154716  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:03.324211  431413 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.212122274s)
	I1013 22:14:03.324243  431413 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
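For context: the sed pipeline completed above splices a hosts block into the CoreDNS Corefile so cluster pods can resolve host.minikube.internal to the host gateway (192.168.49.1). A minimal way to confirm the injected block after the fact (illustrative command, not part of the test run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

Per the sed expression above, the relevant fragment should read:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}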
	I1013 22:14:03.325208  431413 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.764000832s)
	I1013 22:14:03.325810  431413 node_ready.go:35] waiting up to 6m0s for node "addons-801288" to be "Ready" ...
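node_ready polls the node object until its Ready condition turns True; the recurring `node "addons-801288" has "Ready":"False" status (will retry)` lines below are that poll. A one-shot equivalent with stock kubectl would be (illustrative, assuming the same 6m budget):

	kubectl wait --for=condition=Ready node/addons-801288 --timeout=6m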
	I1013 22:14:03.531993  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 22:14:03.532020  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 22:14:03.793919  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 22:14:03.793945  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 22:14:03.829681  431413 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-801288" context rescaled to 1 replicas
	I1013 22:14:04.024611  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 22:14:04.024679  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 22:14:04.214086  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 22:14:04.214158  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 22:14:04.451861  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 22:14:04.451888  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 22:14:04.771377  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 22:14:04.771404  431413 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 22:14:05.034976  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 22:14:05.035005  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W1013 22:14:05.331073  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:05.392678  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 22:14:05.392702  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 22:14:05.606863  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 22:14:05.606891  431413 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 22:14:05.712748  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 22:14:06.304237  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.25178547s)
	W1013 22:14:06.304273  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:06.304294  431413 retry.go:31] will retry after 315.57066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
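kubectl rejects any manifest document that omits the required apiVersion and kind fields, which is what the validation error above reports for ig-crd.yaml; minikube's response is the `apply --force` retry loop seen below. kubectl's own suggested escape hatch, shown here only as an illustration, skips schema validation entirely:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml

The durable fix is a manifest in which every document declares both fields, e.g. apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition.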
	I1013 22:14:06.620538  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 22:14:07.341719  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:07.692540  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.636591448s)
	I1013 22:14:07.692576  431413 addons.go:479] Verifying addon ingress=true in "addons-801288"
	I1013 22:14:07.692940  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.42703082s)
	I1013 22:14:07.692992  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.34955039s)
	I1013 22:14:07.693029  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.314804158s)
	I1013 22:14:07.693072  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.231517222s)
	I1013 22:14:07.693107  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.230654659s)
	I1013 22:14:07.693294  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.143945039s)
	I1013 22:14:07.693356  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.140430361s)
	I1013 22:14:07.693394  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.123387093s)
	I1013 22:14:07.693409  431413 addons.go:479] Verifying addon registry=true in "addons-801288"
	I1013 22:14:07.693756  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.120939823s)
	I1013 22:14:07.693940  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.909249121s)
	I1013 22:14:07.693962  431413 addons.go:479] Verifying addon metrics-server=true in "addons-801288"
	I1013 22:14:07.694015  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.585268341s)
	I1013 22:14:07.694160  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.539413835s)
	W1013 22:14:07.694191  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 22:14:07.694209  431413 retry.go:31] will retry after 155.875955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
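The failure above is an ordering problem: the VolumeSnapshotClass object cannot be mapped until its CRD, applied in the same batch, has been registered and established by the API server, hence "ensure CRDs are installed first". minikube simply retries; a sketch of how the two steps could be serialized instead (illustrative, assuming the same file paths):

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml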
	I1013 22:14:07.697367  431413 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-801288 service yakd-dashboard -n yakd-dashboard
	
	I1013 22:14:07.697457  431413 out.go:179] * Verifying registry addon...
	I1013 22:14:07.697487  431413 out.go:179] * Verifying ingress addon...
	I1013 22:14:07.701207  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 22:14:07.701207  431413 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 22:14:07.716637  431413 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 22:14:07.716664  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:07.716872  431413 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 22:14:07.716889  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:07.722223  431413 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
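That storageclass error is Kubernetes' optimistic-concurrency check: the client read local-path, something else updated it in the meantime, and the subsequent write carried a stale resourceVersion. A server-side patch sidesteps the read-modify-write race because the server applies it to the current object; an illustrative equivalent of marking local-path non-default:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'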
	I1013 22:14:07.850548  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:08.125530  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.412720723s)
	I1013 22:14:08.125650  431413 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-801288"
	I1013 22:14:08.128865  431413 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 22:14:08.132587  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 22:14:08.152756  431413 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 22:14:08.152784  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:08.220306  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.599726793s)
	W1013 22:14:08.220395  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:08.220523  431413 retry.go:31] will retry after 223.794412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:08.250317  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:08.250862  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:08.444613  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:08.636421  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:08.737266  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:08.737913  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:08.889660  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 22:14:08.889747  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:08.912729  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:09.096306  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 22:14:09.109755  431413 addons.go:238] Setting addon gcp-auth=true in "addons-801288"
	I1013 22:14:09.109802  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:09.110241  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:09.136341  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:09.138017  431413 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 22:14:09.138077  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:09.163693  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:09.207293  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:09.207544  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:09.438427  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:09.438465  431413 retry.go:31] will retry after 610.373608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:09.442213  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:09.445105  431413 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 22:14:09.447946  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 22:14:09.447977  431413 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 22:14:09.461315  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 22:14:09.461337  431413 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 22:14:09.474115  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 22:14:09.474138  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 22:14:09.487643  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 22:14:09.636060  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:09.706312  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:09.707407  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:09.831044  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:09.953698  431413 addons.go:479] Verifying addon gcp-auth=true in "addons-801288"
	I1013 22:14:09.956361  431413 out.go:179] * Verifying gcp-auth addon...
	I1013 22:14:09.960028  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 22:14:09.977098  431413 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 22:14:09.977171  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:10.049513  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:10.137782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:10.240951  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:10.241509  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:10.463983  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:10.635783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:10.705388  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:10.706322  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:10.894957  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:10.894990  431413 retry.go:31] will retry after 1.221428298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:10.964011  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:11.136472  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:11.204861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:11.205002  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:11.464150  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:11.636054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:11.705303  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:11.705778  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:11.963192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:12.117354  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:12.135766  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:12.205202  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:12.206082  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:12.329237  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:12.463783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:12.636571  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:12.704589  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:12.706106  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:12.945151  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:12.945187  431413 retry.go:31] will retry after 1.258306834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:12.962700  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:13.135987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:13.205401  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:13.206649  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:13.463295  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:13.636510  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:13.705483  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:13.705841  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:13.963374  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:14.136579  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:14.204662  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:14.204769  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:14.207253  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:14.329929  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:14.463353  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:14.638071  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:14.709209  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:14.710118  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:14.963489  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 22:14:15.098893  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:15.098974  431413 retry.go:31] will retry after 2.430229456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:15.135999  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:15.205821  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:15.205942  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:15.464090  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:15.636189  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:15.704372  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:15.704769  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:15.963525  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:16.135628  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:16.204763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:16.204927  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:16.463321  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:16.636505  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:16.704707  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:16.704895  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:16.828802  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:16.963946  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:17.136217  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:17.205222  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:17.205498  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:17.462916  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:17.529971  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:17.636322  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:17.706552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:17.707153  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:17.963750  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:18.136285  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:18.206494  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:18.206606  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:18.364144  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:18.364232  431413 retry.go:31] will retry after 3.557976141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:18.462986  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:18.635928  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:18.705412  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:18.705630  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:18.829328  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:18.963319  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:19.136054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:19.205564  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:19.205790  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:19.463197  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:19.635854  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:19.704665  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:19.705260  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:19.963041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:20.136037  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:20.205736  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:20.205896  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:20.463247  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:20.636051  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:20.705355  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:20.705456  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:20.963129  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:21.135944  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:21.205177  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:21.205424  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:21.329158  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:21.463071  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:21.635978  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:21.705826  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:21.706109  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:21.923146  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:21.963546  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:22.136514  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:22.206528  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:22.207409  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:22.464287  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:22.637074  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:22.706724  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:22.706973  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:22.772882  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:22.772919  431413 retry.go:31] will retry after 3.841219822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:22.964041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:23.136162  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:23.205374  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:23.205633  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:23.329745  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:23.464145  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:23.636290  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:23.705261  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:23.705445  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:23.963464  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:24.135468  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:24.204771  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:24.205004  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:24.463850  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:24.635722  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:24.705224  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:24.705307  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:24.963050  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:25.136280  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:25.205258  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:25.205347  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:25.463993  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:25.635766  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:25.705105  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:25.705549  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:25.829489  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:25.963315  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:26.136325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:26.204672  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:26.204804  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:26.463738  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:26.615166  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:26.635998  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:26.706439  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:26.706722  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:26.963174  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:27.136965  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:27.206397  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:27.206870  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:27.416899  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:27.416934  431413 retry.go:31] will retry after 5.688273921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
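Note on the failure above: kubectl's client-side validation is rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest, as kubectl sees it, is missing its apiVersion and kind header fields, which every Kubernetes manifest must carry. The actual contents of ig-crd.yaml are not shown in this log, but the error class is easy to reproduce and fix with a throwaway manifest (file names below are illustrative only; kubectl stands for any client pointed at the cluster):

	# A manifest with no apiVersion/kind fails validation the same way:
	cat <<'EOF' > /tmp/missing-header.yaml
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/missing-header.yaml
	# expect an error like the one above: [apiVersion not set, kind not set]

	# Adding the two header fields satisfies validation:
	cat <<'EOF' > /tmp/with-header.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/with-header.yaml

Passing --validate=false, as the error message suggests, would only mask the problem here: without apiVersion and kind, kubectl cannot even determine which API resource the object belongs to, so the apply would still fail.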
	I1013 22:14:27.463876  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:27.635919  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:27.705357  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:27.705515  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:27.963610  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:28.135884  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:28.205116  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:28.205503  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:28.329212  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:28.463470  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:28.636443  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:28.704937  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:28.705069  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:28.963886  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:29.135781  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:29.205590  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:29.206775  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:29.463763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:29.635265  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:29.705714  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:29.705832  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:29.963714  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:30.135899  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:30.205223  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:30.205394  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:30.335647  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:30.464045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:30.636117  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:30.705702  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:30.706372  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:30.963385  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:31.136055  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:31.205270  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:31.205420  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:31.463427  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:31.635687  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:31.705127  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:31.705365  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:31.963408  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:32.136810  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:32.205539  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:32.205961  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:32.463545  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:32.636354  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:32.704630  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:32.704923  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:32.828645  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:32.964090  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:33.106285  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:33.136340  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:33.205861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:33.206478  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:33.463326  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:33.637020  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:33.706125  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:33.706734  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:33.958274  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:33.958353  431413 retry.go:31] will retry after 8.699411523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
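The retry intervals logged by retry.go grow roughly geometrically (5.69s above, 8.70s here, 15.45s on the next attempt), which looks like exponential backoff with jitter between attempts. A minimal shell sketch of the same pattern, using the manifest paths the log shows (an illustration of the technique, not minikube's actual retry code):

	delay=5
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  sleep "$delay"
	  delay=$((delay * 2))   # roughly double the wait after each failure
	done

Note that the loop cannot converge in this run: every attempt fails for the same deterministic reason (the missing apiVersion/kind header), so backing off only delays the eventual addon failure.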
	I1013 22:14:33.963100  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:34.136063  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:34.205031  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:34.205264  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:34.463560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:34.636328  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:34.705298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:34.705448  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:34.829416  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:34.963136  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:35.136558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:35.204976  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:35.205158  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:35.464398  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:35.636474  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:35.705398  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:35.705532  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:35.963373  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:36.136528  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:36.205120  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:36.205412  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:36.463552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:36.636394  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:36.705857  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:36.706004  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:36.962880  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:37.136107  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:37.205313  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:37.205551  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:37.329615  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:37.463589  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:37.636543  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:37.704814  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:37.705219  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:37.962908  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:38.136784  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:38.205396  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:38.205569  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:38.463908  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:38.635944  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:38.705350  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:38.705417  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:38.963433  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:39.136497  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:39.205468  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:39.205641  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:39.463546  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:39.635679  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:39.705232  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:39.705400  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:39.829262  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:39.963486  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:40.136819  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:40.205167  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:40.205323  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:40.463247  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:40.635961  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:40.705518  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:40.705622  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:40.963376  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:41.135963  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:41.205341  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:41.205929  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:41.463839  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:41.636126  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:41.705597  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:41.705656  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:41.829364  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:41.963054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:42.137703  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:42.205325  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:42.206055  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:42.462940  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:42.635988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:42.658181  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:42.706325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:42.706745  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:42.963930  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:43.161279  431413 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 22:14:43.161343  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:43.266556  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:43.267042  431413 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 22:14:43.267121  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:43.335970  431413 node_ready.go:49] node "addons-801288" is "Ready"
	I1013 22:14:43.336078  431413 node_ready.go:38] duration metric: took 40.010239803s for node "addons-801288" to be "Ready" ...
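The node turning Ready at 22:14:43, about 40s after registration, is what unblocks the pending addon pods polled above. An equivalent manual check, assuming a kubectl pointed at this cluster (hypothetical invocation, not taken from the log):

	kubectl get node addons-801288 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "True" once the node is Ready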
	I1013 22:14:43.336108  431413 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:14:43.336202  431413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:14:43.537560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:43.671782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:43.729006  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:43.729475  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:43.976350  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:44.146106  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:44.205358  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:44.209258  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:44.312358  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.65414049s)
	W1013 22:14:44.312435  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:44.312512  431413 retry.go:31] will retry after 15.4515216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:44.312583  431413 api_server.go:72] duration metric: took 43.449682894s to wait for apiserver process to appear ...
	I1013 22:14:44.312605  431413 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:14:44.312654  431413 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 22:14:44.321111  431413 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
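The healthz probe above is a plain HTTPS GET; Kubernetes grants anonymous access to the health endpoints by default (via the system:public-info-viewer role), so the same check can be made by hand, assuming default RBAC (illustrative command, not from the log):

	curl -sk https://192.168.49.2:8443/healthz
	# ok

Recent API servers also expose the more granular /readyz and /livez endpoints; /healthz is the older aggregate check.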
	I1013 22:14:44.322143  431413 api_server.go:141] control plane version: v1.34.1
	I1013 22:14:44.322164  431413 api_server.go:131] duration metric: took 9.521378ms to wait for apiserver health ...
	I1013 22:14:44.322173  431413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:14:44.328903  431413 system_pods.go:59] 19 kube-system pods found
	I1013 22:14:44.328983  431413 system_pods.go:61] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.329009  431413 system_pods.go:61] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.329050  431413 system_pods.go:61] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.329076  431413 system_pods.go:61] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.329101  431413 system_pods.go:61] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.329123  431413 system_pods.go:61] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.329158  431413 system_pods.go:61] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.329185  431413 system_pods.go:61] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.329208  431413 system_pods.go:61] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.329229  431413 system_pods.go:61] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.329261  431413 system_pods.go:61] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.329286  431413 system_pods.go:61] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.329321  431413 system_pods.go:61] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.329344  431413 system_pods.go:61] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.329374  431413 system_pods.go:61] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.329400  431413 system_pods.go:61] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.329423  431413 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.329446  431413 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.329479  431413 system_pods.go:61] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:14:44.329506  431413 system_pods.go:74] duration metric: took 7.326099ms to wait for pod list to return data ...
	I1013 22:14:44.329529  431413 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:14:44.348254  431413 default_sa.go:45] found service account: "default"
	I1013 22:14:44.348330  431413 default_sa.go:55] duration metric: took 18.778999ms for default service account to be created ...
	I1013 22:14:44.348358  431413 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:14:44.427545  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.427628  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.427652  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.427677  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.427722  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.427745  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.427765  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.427795  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.427819  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.427902  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.427926  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.427947  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.427968  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.428002  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.428030  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.428050  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.428071  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.428108  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.428137  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.428159  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:14:44.428191  431413 retry.go:31] will retry after 206.118733ms: missing components: kube-dns
	I1013 22:14:44.526600  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:44.636711  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:44.640817  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.640906  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.640929  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.640968  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.640995  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.641018  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.641039  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.641071  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.641094  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.641115  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.641134  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.641154  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.641183  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.641216  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.641238  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.641262  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.641293  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.641324  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.641349  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.641369  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:44.641410  431413 retry.go:31] will retry after 235.031412ms: missing components: kube-dns
	I1013 22:14:44.756030  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:44.756267  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:44.883611  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.883713  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.883737  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.883772  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.883799  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.883818  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.883877  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.883902  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.883921  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.883943  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.883963  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.883998  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.884018  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.884041  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.884079  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.884103  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.884125  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.884146  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.884180  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.884203  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:44.884233  431413 retry.go:31] will retry after 342.812301ms: missing components: kube-dns
	I1013 22:14:44.981237  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:45.137758  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:45.217052  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:45.217343  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:45.233422  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:45.233526  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:45.233598  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:45.233640  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:45.233671  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:45.233695  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:45.233731  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:45.233751  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:45.233771  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:45.233806  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:45.233827  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:45.233848  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:45.233871  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:45.233906  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:45.233934  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:45.233958  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:45.233981  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:45.234015  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.234045  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.234091  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:45.234139  431413 retry.go:31] will retry after 534.817329ms: missing components: kube-dns
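The "missing components: kube-dns" retries above are polling for the CoreDNS pod to reach Running, which it does on the next check below. The same gate can be expressed directly with kubectl wait, assuming the standard k8s-app=kube-dns label that kubeadm applies to CoreDNS pods (hypothetical command, not from the log):

	kubectl -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=120s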
	I1013 22:14:45.464484  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:45.642785  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:45.706719  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:45.707021  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:45.775336  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:45.775413  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Running
	I1013 22:14:45.775444  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:45.775483  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:45.775510  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:45.775529  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:45.775549  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:45.775588  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:45.775613  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:45.775633  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:45.775653  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:45.775672  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:45.775703  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:45.775728  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:45.775752  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:45.775778  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:45.775810  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:45.775850  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.775871  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.775891  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:45.775927  431413 system_pods.go:126] duration metric: took 1.427549017s to wait for k8s-apps to be running ...
	I1013 22:14:45.775954  431413 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:14:45.776038  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:14:45.792808  431413 system_svc.go:56] duration metric: took 16.846057ms WaitForService to wait for kubelet
	I1013 22:14:45.792878  431413 kubeadm.go:586] duration metric: took 44.92997648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
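The kubelet probe above is just systemctl run over SSH with the --quiet flag, so the state is signalled entirely through the exit code. A hedged local equivalent with os/exec (minikube routes the command through its ssh_runner and an extra "service" token, neither of which is reproduced here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `is-active --quiet` prints nothing; exit code 0 means active,
	// non-zero means inactive, failed, or unknown.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not running:", err)
		return
	}
	fmt.Println("kubelet service is running")
}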
	I1013 22:14:45.792912  431413 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:14:45.796717  431413 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:14:45.796797  431413 node_conditions.go:123] node cpu capacity is 2
	I1013 22:14:45.796825  431413 node_conditions.go:105] duration metric: took 3.891311ms to run NodePressure ...
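node_conditions.go derives the two capacity figures above (203034800Ki ephemeral storage, 2 CPUs) from the node object's status. A minimal sketch that reads the same fields with client-go (the NodePressure condition checks themselves are omitted; the kubeconfig path is taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The log's "ephemeral capacity" and "cpu capacity" come from these fields.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}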
	I1013 22:14:45.796852  431413 start.go:241] waiting for startup goroutines ...
	I1013 22:14:45.964281  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:46.136727  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:46.206446  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:46.206783  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:46.463583  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:46.636367  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:46.705474  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:46.706242  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:46.963254  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:47.136335  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:47.206037  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:47.206413  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:47.463878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:47.636445  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:47.706356  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:47.706743  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:47.963460  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:48.135853  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:48.205225  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:48.205700  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:48.463948  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:48.636452  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:48.705445  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:48.705579  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:48.963605  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:49.136045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:49.205746  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:49.205900  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:49.462828  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:49.636105  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:49.705639  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:49.705973  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:49.963232  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:50.136833  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:50.205609  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:50.205740  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:50.463621  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:50.635379  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:50.705439  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:50.705921  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:50.963045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:51.137026  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:51.206465  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:51.206590  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:51.464761  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:51.644074  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:51.705251  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:51.708291  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:51.964698  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:52.145708  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:52.214939  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:52.217104  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:52.464283  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:52.651663  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:52.714077  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:52.714494  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:52.963598  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:53.137725  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:53.210747  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:53.212299  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:53.463366  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:53.637736  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:53.707499  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:53.708013  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:53.964963  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:54.156250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:54.254512  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:54.254672  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:54.464844  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:54.639485  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:54.705293  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:54.705449  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:54.964115  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:55.137044  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:55.206108  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:55.206215  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:55.463331  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:55.638157  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:55.706467  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:55.706878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:55.964154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:56.136463  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:56.206018  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:56.206274  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:56.463363  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:56.636269  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:56.706625  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:56.706723  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:56.963799  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:57.136323  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:57.205792  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:57.206821  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:57.465573  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:57.635880  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:57.705780  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:57.705934  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:57.962730  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:58.135698  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:58.214140  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:58.215939  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:58.463192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:58.636212  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:58.706041  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:58.706600  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:58.963754  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:59.136563  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:59.208274  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:59.208917  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:59.462933  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:59.636441  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:59.706416  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:59.707325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:59.764660  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:59.963036  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:00.136997  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:00.206554  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:00.207192  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:00.470089  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:00.649265  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:00.763625  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:00.764888  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:00.963979  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:01.137477  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:01.205975  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:01.206557  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:01.275543  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.510786866s)
	W1013 22:15:01.275591  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:15:01.275617  431413 retry.go:31] will retry after 13.273918113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
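The failure above is kubectl's client-side validation rejecting ig-crd.yaml: at least one YAML document in that file is missing the required top-level apiVersion and kind fields (the deployment manifest applies cleanly, so only the CRD file is at fault). A hedged pre-flight check in Go with gopkg.in/yaml.v3 that would flag the same problem before the apply; the file path is taken from the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Walk every `---`-separated document in the file.
	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// kubectl refuses any document missing these two top-level fields.
		for _, field := range []string{"apiVersion", "kind"} {
			if _, ok := doc[field]; !ok {
				fmt.Printf("document %d: %s not set\n", i, field)
			}
		}
	}
}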
	I1013 22:15:01.463687  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:01.637813  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:01.706972  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:01.708270  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:01.963935  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:02.136974  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:02.206787  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:02.207009  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:02.464472  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:02.637349  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:02.705920  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:02.706053  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:02.963480  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:03.136442  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:03.206615  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:03.207069  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:03.463493  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:03.637066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:03.707163  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:03.707606  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:03.964338  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:04.137361  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:04.206627  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:04.207154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:04.464586  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:04.636329  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:04.706136  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:04.706688  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:04.964204  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:05.137368  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:05.206458  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:05.206780  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:05.468518  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:05.636100  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:05.727561  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:05.728128  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:05.964363  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:06.136960  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:06.207298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:06.208414  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:06.464543  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:06.636583  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:06.708887  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:06.709308  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:06.964130  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:07.136988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:07.205333  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:07.206367  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:07.463860  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:07.637206  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:07.706725  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:07.706839  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:07.963688  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:08.136443  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:08.205132  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:08.205906  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:08.463783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:08.636256  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:08.706208  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:08.706667  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:08.964288  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:09.136877  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:09.205970  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:09.206116  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:09.463306  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:09.636455  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:09.705332  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:09.706009  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:09.963303  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:10.138403  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:10.238358  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:10.238553  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:10.463967  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:10.636782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:10.737220  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:10.737401  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:10.963796  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:11.136621  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:11.206291  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:11.206911  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:11.463987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:11.636440  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:11.704582  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:11.705350  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:11.963674  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:12.136400  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:12.208275  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:12.208751  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:12.464727  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:12.636251  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:12.704685  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:12.704896  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:12.963046  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:13.136166  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:13.206227  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:13.206750  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:13.464745  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:13.638041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:13.708483  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:13.708887  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:13.963811  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:14.136983  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:14.206610  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:14.206803  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:14.463851  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:14.550096  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:15:14.635761  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:14.705746  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:14.705796  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:14.964066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:15.137248  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:15.238086  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:15.238580  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:15.464278  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:15.637185  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:15.707066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:15.707551  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:15.964657  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:16.047117  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.496931923s)
	W1013 22:15:16.047436  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:15:16.047543  431413 retry.go:31] will retry after 30.313623634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
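retry.go:31 schedules each reattempt of the failed apply with a growing, jittered delay (534ms, then 13.3s, then 30.3s in this run). A minimal sketch of that pattern; the doubling multiplier and the jitter range here are assumptions, not minikube's exact backoff constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping an
// exponentially growing, jittered interval between failures.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base * (1 << i)
		d += time.Duration(rand.Int63n(int64(d))) // up to 2x jitter
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(3, 500*time.Millisecond, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply above
	})
}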
	I1013 22:15:16.136776  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:16.206477  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:16.206661  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:16.464166  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:16.636558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:16.706382  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:16.706653  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:16.963538  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:17.135834  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:17.205677  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:17.205882  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:17.463812  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:17.635982  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:17.705915  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:17.706142  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:17.965755  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:18.136855  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:18.206065  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:18.206680  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:18.464549  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:18.636582  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:18.706361  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:18.707718  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:18.964042  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:19.137013  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:19.206337  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:19.206317  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:19.464348  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:19.636393  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:19.706934  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:19.707241  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:19.963295  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:20.137306  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:20.205981  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:20.206190  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:20.463702  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:20.636155  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:20.706172  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:20.706794  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:20.963940  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:21.137221  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:21.206609  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:21.207005  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:21.463255  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:21.639106  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:21.708792  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:21.709051  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:21.962818  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:22.136747  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:22.206550  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:22.206709  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:22.464410  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:22.636560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:22.705987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:22.706121  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:22.962787  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:23.136184  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:23.205210  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:23.205388  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:23.464144  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:23.637187  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:23.706803  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:23.706965  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:23.963407  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:24.135936  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:24.207255  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:24.207477  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:24.464338  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:24.637304  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:24.705552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:24.705693  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:24.963639  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:25.137881  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:25.205785  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:25.208516  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:25.464021  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:25.636566  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:25.708588  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:25.708964  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:25.965723  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:26.136086  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:26.206057  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:26.206514  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:26.463885  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:26.636250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:26.704593  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:26.704818  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:26.964038  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:27.136922  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:27.205276  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:27.205539  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:27.463683  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:27.635794  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:27.705837  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:27.706080  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:27.963895  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:28.141430  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:28.206269  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:28.206685  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:28.464165  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:28.636372  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:28.705524  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:28.705647  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:28.964790  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:29.136190  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:29.205936  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:29.206843  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:29.464298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:29.637377  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:29.706353  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:29.706936  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:29.963148  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:30.137387  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:30.205558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:30.206545  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:30.487451  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:30.637961  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:30.706300  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:30.706430  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:30.963199  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:31.136976  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:31.205723  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:31.206622  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:31.463712  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:31.636444  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:31.705763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:31.706250  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:31.963155  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:32.144754  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:32.206088  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:32.206406  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:32.463713  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:32.636001  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:32.704901  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:32.705027  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:32.963263  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:33.136356  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:33.206048  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:33.206142  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:33.463172  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:33.636665  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:33.705271  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:33.705870  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:33.964140  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:34.136475  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:34.205410  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:34.207259  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:34.464133  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:34.636917  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:34.706728  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:34.707228  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:34.963012  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:35.136332  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:35.206909  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:35.206988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:35.463192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:35.636968  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:35.705109  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:35.705301  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:35.963794  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:36.136253  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:36.204547  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:36.205170  431413 kapi.go:107] duration metric: took 1m28.503963664s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 22:15:36.462964  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:36.637480  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:36.705527  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:36.964254  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:37.136380  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:37.204667  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:37.464291  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:37.637300  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:37.705868  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:37.963154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:38.136798  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:38.204910  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:38.463414  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:38.638250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:38.705754  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:38.973342  431413 kapi.go:107] duration metric: took 1m29.0133146s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 22:15:38.976529  431413 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-801288 cluster.
	I1013 22:15:38.979441  431413 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 22:15:38.982404  431413 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
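	
	For reference, a minimal sketch of a pod spec carrying that opt-out label. The label key comes from the gcp-auth message above; the value "true", the pod name, and the container image are illustrative assumptions, not taken from this run:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: example-no-gcp-creds        # hypothetical pod name
	    labels:
	      gcp-auth-skip-secret: "true"    # key from the gcp-auth message; value assumed
	  spec:
	    containers:
	    - name: app
	      image: registry.k8s.io/pause:3.9  # placeholder image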
	I1013 22:15:39.135897  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:39.205213  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:39.636740  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:39.704900  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:40.136853  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:40.205500  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:40.636282  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:40.705340  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:41.136533  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:41.204627  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:41.636962  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:41.705461  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:42.138389  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:42.205429  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:42.637248  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:42.705766  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:43.142922  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:43.207161  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:43.635861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:43.705437  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:44.139571  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:44.204887  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:44.637009  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:44.705345  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:45.137227  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:45.208221  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:45.636327  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:45.704848  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:46.137934  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:46.206768  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:46.362141  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:15:46.636890  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:46.738761  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:47.140503  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:47.207057  431413 kapi.go:107] duration metric: took 1m39.505850505s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 22:15:47.636171  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:47.658633  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.296387256s)
	W1013 22:15:47.658669  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 22:15:47.658751  431413 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
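	
	The stderr above is kubectl's standard validation failure for a manifest document that lacks its type header: every object in an applied file must declare apiVersion and kind. A minimal sketch of the header a CRD manifest such as ig-crd.yaml would need; the group and names below are illustrative, not read from the actual file:
	
	  apiVersion: apiextensions.k8s.io/v1   # required; its absence triggers "apiVersion not set"
	  kind: CustomResourceDefinition        # required; its absence triggers "kind not set"
	  metadata:
	    name: traces.gadget.example.io      # hypothetical CRD name
	
	The --validate=false escape hatch the error mentions would suppress this check, but it also hides genuinely malformed objects, which is why the addon retries the apply instead.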
	I1013 22:15:48.230823  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:48.636920  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:49.137425  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:49.635552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:50.137143  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:50.638159  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:51.141126  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:51.639473  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:52.136606  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:52.646043  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:53.136228  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:53.636637  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:54.138464  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:54.636346  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:55.136596  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:55.636878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:56.137221  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:56.636309  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:57.136198  431413 kapi.go:107] duration metric: took 1m49.003611681s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 22:15:57.139348  431413 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, registry-creds, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1013 22:15:57.142328  431413 addons.go:514] duration metric: took 1m56.278915408s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin registry-creds ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1013 22:15:57.142379  431413 start.go:246] waiting for cluster config update ...
	I1013 22:15:57.142400  431413 start.go:255] writing updated cluster config ...
	I1013 22:15:57.142700  431413 ssh_runner.go:195] Run: rm -f paused
	I1013 22:15:57.146295  431413 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:15:57.150126  431413 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-25z8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.155035  431413 pod_ready.go:94] pod "coredns-66bc5c9577-25z8n" is "Ready"
	I1013 22:15:57.155062  431413 pod_ready.go:86] duration metric: took 4.908795ms for pod "coredns-66bc5c9577-25z8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.157566  431413 pod_ready.go:83] waiting for pod "etcd-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.161995  431413 pod_ready.go:94] pod "etcd-addons-801288" is "Ready"
	I1013 22:15:57.162071  431413 pod_ready.go:86] duration metric: took 4.478682ms for pod "etcd-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.164719  431413 pod_ready.go:83] waiting for pod "kube-apiserver-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.169788  431413 pod_ready.go:94] pod "kube-apiserver-addons-801288" is "Ready"
	I1013 22:15:57.169819  431413 pod_ready.go:86] duration metric: took 5.068536ms for pod "kube-apiserver-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.172375  431413 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.550754  431413 pod_ready.go:94] pod "kube-controller-manager-addons-801288" is "Ready"
	I1013 22:15:57.550781  431413 pod_ready.go:86] duration metric: took 378.381672ms for pod "kube-controller-manager-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.750753  431413 pod_ready.go:83] waiting for pod "kube-proxy-8c9vh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.150695  431413 pod_ready.go:94] pod "kube-proxy-8c9vh" is "Ready"
	I1013 22:15:58.150731  431413 pod_ready.go:86] duration metric: took 399.952442ms for pod "kube-proxy-8c9vh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.351247  431413 pod_ready.go:83] waiting for pod "kube-scheduler-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.750282  431413 pod_ready.go:94] pod "kube-scheduler-addons-801288" is "Ready"
	I1013 22:15:58.750312  431413 pod_ready.go:86] duration metric: took 399.037827ms for pod "kube-scheduler-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.750326  431413 pod_ready.go:40] duration metric: took 1.603996798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
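	
	Each of those readiness waits is driven by a label selector. An equivalent manual check against this cluster would be one kubectl query per label (a sketch, assuming the kubeconfig context this run creates):
	
	  kubectl --context addons-801288 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context addons-801288 -n kube-system get pods -l component=etcd
	  kubectl --context addons-801288 -n kube-system get pods -l component=kube-apiserver
	  kubectl --context addons-801288 -n kube-system get pods -l k8s-app=kube-proxy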
	I1013 22:15:58.812267  431413 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:15:58.815531  431413 out.go:179] * Done! kubectl is now configured to use "addons-801288" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 22:18:56 addons-801288 crio[825]: time="2025-10-13T22:18:56.313207931Z" level=info msg="Removed container 39acabc7fb60c3d029669641990893e178cd382a41babfa1a04bbb1e2f1eab86: kube-system/registry-creds-764b6fb674-2kdj8/registry-creds" id=fe1416fe-1759-4fe5-bcd9-7b347d057e00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.431129004Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-f2smx/POD" id=187d4be7-bc5a-41b5-a95b-cacab152aba8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.431203825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.443441993Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-f2smx Namespace:default ID:21a6da375d100205216d10ba54d2c27170be90c97d3ccc21ee1eac57d8528053 UID:857e8424-cca1-42e3-9273-776e74b7ed6e NetNS:/var/run/netns/62f77e57-cfc8-42db-97f9-72d43ab77377 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f9e8}] Aliases:map[]}"
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.443667712Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-f2smx to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.461506676Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-f2smx Namespace:default ID:21a6da375d100205216d10ba54d2c27170be90c97d3ccc21ee1eac57d8528053 UID:857e8424-cca1-42e3-9273-776e74b7ed6e NetNS:/var/run/netns/62f77e57-cfc8-42db-97f9-72d43ab77377 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f9e8}] Aliases:map[]}"
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.462124781Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-f2smx for CNI network kindnet (type=ptp)"
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.466933149Z" level=info msg="Ran pod sandbox 21a6da375d100205216d10ba54d2c27170be90c97d3ccc21ee1eac57d8528053 with infra container: default/hello-world-app-5d498dc89-f2smx/POD" id=187d4be7-bc5a-41b5-a95b-cacab152aba8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.471522321Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e0bebf2d-195f-4e9c-a0c9-4599fb0f8782 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.472170309Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e0bebf2d-195f-4e9c-a0c9-4599fb0f8782 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.472559733Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=e0bebf2d-195f-4e9c-a0c9-4599fb0f8782 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.477599309Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=2fc143ed-2899-400a-93a7-0233f3603017 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:19:00 addons-801288 crio[825]: time="2025-10-13T22:19:00.482366981Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.129847587Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=2fc143ed-2899-400a-93a7-0233f3603017 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.130753449Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f802b6c1-03ba-4f5b-8c9d-b5308f557967 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.137412727Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2f155cb1-be3e-43aa-bc69-c1588a0129ea name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.143710126Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-f2smx/hello-world-app" id=a3aca71b-4dac-416d-a744-4bfef4aa22ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.144853202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.157602916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.157975439Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d55a7e576d094ab886c83c40ab3ce527ec9d621b61135f0f13250a9720f6083a/merged/etc/passwd: no such file or directory"
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.158083014Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d55a7e576d094ab886c83c40ab3ce527ec9d621b61135f0f13250a9720f6083a/merged/etc/group: no such file or directory"
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.158415882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.19955805Z" level=info msg="Created container 0493b4ec0d4521a8b8531388159e8e892308f214acf03663cdc5e5817e79f6ce: default/hello-world-app-5d498dc89-f2smx/hello-world-app" id=a3aca71b-4dac-416d-a744-4bfef4aa22ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.201038506Z" level=info msg="Starting container: 0493b4ec0d4521a8b8531388159e8e892308f214acf03663cdc5e5817e79f6ce" id=2b7422de-2785-4052-988a-322a1391e38f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:19:01 addons-801288 crio[825]: time="2025-10-13T22:19:01.203388049Z" level=info msg="Started container" PID=7126 containerID=0493b4ec0d4521a8b8531388159e8e892308f214acf03663cdc5e5817e79f6ce description=default/hello-world-app-5d498dc89-f2smx/hello-world-app id=2b7422de-2785-4052-988a-322a1391e38f name=/runtime.v1.RuntimeService/StartContainer sandboxID=21a6da375d100205216d10ba54d2c27170be90c97d3ccc21ee1eac57d8528053
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0493b4ec0d452       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   21a6da375d100       hello-world-app-5d498dc89-f2smx             default
	a8db30564f0ff       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             6 seconds ago            Exited              registry-creds                           1                   e55a0e09f6075       registry-creds-764b6fb674-2kdj8             kube-system
	e86736cb9eb8d       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   f23a65e0a8ac6       nginx                                       default
	e8da23ef29e9d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   f6282bc88ae6b       busybox                                     default
	f153bd237ffa7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	871bc19c45720       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	aa5d77a451b8b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	df4b38a9a0c59       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	923be75bce0db       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   54d8860d65176       gadget-rhjv9                                gadget
	1d5282a83ae15       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   0187e7c9ea6e5       ingress-nginx-controller-675c5ddd98-g57b8   ingress-nginx
	e2d00394869df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	8c8b8301b714f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   5bd667958e19a       gcp-auth-78565c9fb4-4pzcx                   gcp-auth
	dd6d3965841ed       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   5929c54b286f8       registry-proxy-528wh                        kube-system
	bacb39f90a23b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   dca3cc770403c       cloud-spanner-emulator-86bd5cbb97-hskxm     default
	d6de93ce6a1b7       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   85e3b0401215c       nvidia-device-plugin-daemonset-wnwll        kube-system
	3c2edf4d8430b       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   b404624a65037       registry-6b586f9694-7nvd4                   kube-system
	f6e30d8af3b56       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   77a417c5fe55e       snapshot-controller-7d9fbc56b8-kbw7j        kube-system
	1b60be6e9e6c2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   4d119e8734ea6       snapshot-controller-7d9fbc56b8-ltgt2        kube-system
	d5134fdc018a5       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   f8d3d47176b06       csi-hostpath-resizer-0                      kube-system
	7c917abe8d5f4       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   a7a08147550a7       csi-hostpath-attacher-0                     kube-system
	7f49cfff22d36       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   812d421b9aa0c       kube-ingress-dns-minikube                   kube-system
	c287771e032e0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   2536d9e1babfe       local-path-provisioner-648f6765c9-9zzrw     local-path-storage
	473c0a66370cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              patch                                    0                   09d2a2305791b       ingress-nginx-admission-patch-2rvhh         ingress-nginx
	76b89318f9c3f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   fbc9f92318a2c       ingress-nginx-admission-create-pr575        ingress-nginx
	6cec628f84ed1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	e6995f51e4b11       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   e3633df7f341f       metrics-server-85b7d694d7-5289b             kube-system
	21350e9dbc830       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   0c989c368ef64       yakd-dashboard-5ff678cb9-z9pmq              yakd-dashboard
	1835a21d66fa2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   06167605247a4       coredns-66bc5c9577-25z8n                    kube-system
	c559aae25c459       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   15ed4168fb425       storage-provisioner                         kube-system
	44caccd237f7a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   68752e2dc6f6a       kube-proxy-8c9vh                            kube-system
	225be8120336e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   ce0ea667d9fcb       kindnet-lqsl4                               kube-system
	3c07379b01c2b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   95ac2a1773d77       kube-scheduler-addons-801288                kube-system
	6a94f2e155481       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   0329529ebc88a       kube-controller-manager-addons-801288       kube-system
	6757789a08c6d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   8b7764dc93382       kube-apiserver-addons-801288                kube-system
	ac07affd57c99       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   ea532b566913d       etcd-addons-801288                          kube-system
	
	
	==> coredns [1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d] <==
	[INFO] 10.244.0.14:50844 - 4761 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001977283s
	[INFO] 10.244.0.14:50844 - 24274 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000101176s
	[INFO] 10.244.0.14:50844 - 52715 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000120359s
	[INFO] 10.244.0.14:42997 - 50720 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167972s
	[INFO] 10.244.0.14:42997 - 50949 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168743s
	[INFO] 10.244.0.14:52663 - 7414 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122484s
	[INFO] 10.244.0.14:52663 - 7611 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105212s
	[INFO] 10.244.0.14:51715 - 32418 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098304s
	[INFO] 10.244.0.14:51715 - 32237 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009517s
	[INFO] 10.244.0.14:50625 - 17194 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001211726s
	[INFO] 10.244.0.14:50625 - 17639 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001407209s
	[INFO] 10.244.0.14:42614 - 10998 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011217s
	[INFO] 10.244.0.14:42614 - 10578 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147591s
	[INFO] 10.244.0.19:42533 - 58712 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017397s
	[INFO] 10.244.0.19:51824 - 22655 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145523s
	[INFO] 10.244.0.19:47677 - 62217 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000246166s
	[INFO] 10.244.0.19:35832 - 54231 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000223257s
	[INFO] 10.244.0.19:37666 - 24937 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014284s
	[INFO] 10.244.0.19:34194 - 30698 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155648s
	[INFO] 10.244.0.19:51192 - 60390 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002334288s
	[INFO] 10.244.0.19:45068 - 49505 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001931622s
	[INFO] 10.244.0.19:50199 - 21187 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001753902s
	[INFO] 10.244.0.19:39136 - 15447 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00181306s
	[INFO] 10.244.0.23:48668 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000177466s
	[INFO] 10.244.0.23:45761 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000187722s
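	
	The NXDOMAIN bursts above are the expected effect of the pod DNS search path rather than a failure: with the default ndots:5, a name such as registry.kube-system.svc.cluster.local (only four dots) is tried with each search suffix appended before the bare name is queried absolutely, and only that final query returns NOERROR. The suffixes visible in this log are:
	
	  kube-system.svc.cluster.local
	  svc.cluster.local
	  cluster.local
	  us-east-2.compute.internal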
	
	
	==> describe nodes <==
	Name:               addons-801288
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-801288
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=addons-801288
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_13_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-801288
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-801288"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-801288
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:18:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:17:30 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:17:30 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:17:30 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:17:30 +0000   Mon, 13 Oct 2025 22:14:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-801288
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                16256a10-42da-4126-a586-4dbee9443032
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     cloud-spanner-emulator-86bd5cbb97-hskxm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-f2smx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-rhjv9                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gcp-auth                    gcp-auth-78565c9fb4-4pzcx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-g57b8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-25z8n                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 csi-hostpathplugin-9mzk9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 etcd-addons-801288                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-lqsl4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-addons-801288                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-801288        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-8c9vh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-801288                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-85b7d694d7-5289b              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m57s
	  kube-system                 nvidia-device-plugin-daemonset-wnwll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 registry-6b586f9694-7nvd4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-creds-764b6fb674-2kdj8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 registry-proxy-528wh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-kbw7j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-ltgt2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  local-path-storage          local-path-provisioner-648f6765c9-9zzrw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-z9pmq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m59s                  kube-proxy       
	  Normal   Starting                 5m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-801288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-801288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s (x8 over 5m13s)  kubelet          Node addons-801288 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s                   kubelet          Node addons-801288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s                   kubelet          Node addons-801288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s                   kubelet          Node addons-801288 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s                   node-controller  Node addons-801288 event: Registered Node addons-801288 in Controller
	  Normal   NodeReady                4m19s                  kubelet          Node addons-801288 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 21:01] hrtimer: interrupt took 13518544 ns
	[Oct13 22:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct13 22:13] overlayfs: idmapped layers are currently not supported
	[  +0.064178] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825] <==
	{"level":"warn","ts":"2025-10-13T22:13:52.584040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.601335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.619360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.637253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.659817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.677561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.693646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.712277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.741320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.747797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.772714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.787184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.800825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.815176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.829827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.857604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.872595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.890406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.952309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:08.337331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:08.354612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.644392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.660387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.696971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.701056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55780","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8c8b8301b714fa679f1c44dfb22722f50e8de8ee4701d1c90f7528f1db1ff614] <==
	2025/10/13 22:15:38 GCP Auth Webhook started!
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
	2025/10/13 22:16:21 Ready to marshal response ...
	2025/10/13 22:16:21 Ready to write response ...
	2025/10/13 22:16:23 Ready to marshal response ...
	2025/10/13 22:16:23 Ready to write response ...
	2025/10/13 22:16:39 Ready to marshal response ...
	2025/10/13 22:16:39 Ready to write response ...
	2025/10/13 22:16:40 Ready to marshal response ...
	2025/10/13 22:16:40 Ready to write response ...
	2025/10/13 22:17:03 Ready to marshal response ...
	2025/10/13 22:17:03 Ready to write response ...
	2025/10/13 22:17:03 Ready to marshal response ...
	2025/10/13 22:17:03 Ready to write response ...
	2025/10/13 22:17:11 Ready to marshal response ...
	2025/10/13 22:17:11 Ready to write response ...
	2025/10/13 22:18:59 Ready to marshal response ...
	2025/10/13 22:18:59 Ready to write response ...
	
	
	==> kernel <==
	 22:19:02 up  2:01,  0 user,  load average: 0.63, 2.24, 3.39
	Linux addons-801288 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b] <==
	I1013 22:16:52.504039       1 main.go:301] handling current node
	I1013 22:17:02.504056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:02.504094       1 main.go:301] handling current node
	I1013 22:17:12.504686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:12.504721       1 main.go:301] handling current node
	I1013 22:17:22.511204       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:22.511237       1 main.go:301] handling current node
	I1013 22:17:32.504315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:32.504449       1 main.go:301] handling current node
	I1013 22:17:42.511041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:42.511133       1 main.go:301] handling current node
	I1013 22:17:52.509354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:17:52.509389       1 main.go:301] handling current node
	I1013 22:18:02.504111       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:02.504154       1 main.go:301] handling current node
	I1013 22:18:12.507188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:12.507224       1 main.go:301] handling current node
	I1013 22:18:22.513337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:22.513372       1 main.go:301] handling current node
	I1013 22:18:32.513071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:32.513106       1 main.go:301] handling current node
	I1013 22:18:42.504288       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:42.504324       1 main.go:301] handling current node
	I1013 22:18:52.512360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:18:52.512469       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a] <==
	W1013 22:14:30.686365       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 22:14:30.700521       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 22:14:43.077311       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.077369       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:43.078127       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.078178       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:43.154332       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.154483       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	E1013 22:14:54.346246       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:54.346445       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 22:14:54.346545       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 22:14:54.347675       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	E1013 22:14:54.352814       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	I1013 22:14:54.451983       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 22:16:08.936101       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48736: use of closed network connection
	E1013 22:16:09.160865       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48766: use of closed network connection
	E1013 22:16:09.295879       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48772: use of closed network connection
	I1013 22:16:34.296782       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1013 22:16:36.225137       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1013 22:16:39.766045       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 22:16:40.062029       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.9.52"}
	I1013 22:19:00.167477       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.155.139"}
	
	
	==> kube-controller-manager [6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4] <==
	I1013 22:14:00.648969       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:14:00.651327       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:14:00.656613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:00.659753       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:14:00.670123       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:14:00.670211       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:14:00.670228       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:14:00.671075       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:14:00.670547       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:14:00.670561       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:14:00.671173       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:14:00.671192       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:14:00.670250       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:14:00.677249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:00.677357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:14:00.677390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1013 22:14:05.700791       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1013 22:14:30.636690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 22:14:30.636844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 22:14:30.636891       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 22:14:30.667256       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 22:14:30.671796       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 22:14:30.737425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:14:30.773159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:45.625797       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb] <==
	I1013 22:14:02.602877       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:14:02.687287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:14:02.788224       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:14:02.788259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 22:14:02.788339       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:14:02.818064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:14:02.818128       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:14:02.825005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:14:02.825268       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:14:02.825281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:14:02.826712       1 config.go:200] "Starting service config controller"
	I1013 22:14:02.826722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:14:02.826748       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:14:02.826752       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:14:02.826763       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:14:02.826770       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:14:02.828800       1 config.go:309] "Starting node config controller"
	I1013 22:14:02.828811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:14:02.828817       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:14:02.927145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:14:02.927179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:14:02.927213       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641] <==
	E1013 22:13:53.680771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:13:53.680827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:13:53.680885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:13:53.686609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:13:53.686707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:13:53.686807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:13:53.686863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:13:53.686914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:13:53.686962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:13:53.687005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:13:53.687093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:13:53.687148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:13:53.687197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:13:53.687246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:13:53.687325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:13:54.593673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:13:54.629187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:13:54.649321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:13:54.650019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:13:54.685983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:13:54.760791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:13:54.869763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 22:13:54.891711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:13:54.902550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1013 22:13:57.232134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:17:13 addons-801288 kubelet[1257]: I1013 22:17:13.657648    1257 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/d808a40e-573f-4a2e-acdf-bc902a9eafa8-data\") on node \"addons-801288\" DevicePath \"\""
	Oct 13 22:17:13 addons-801288 kubelet[1257]: I1013 22:17:13.657692    1257 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/d808a40e-573f-4a2e-acdf-bc902a9eafa8-script\") on node \"addons-801288\" DevicePath \"\""
	Oct 13 22:17:13 addons-801288 kubelet[1257]: I1013 22:17:13.657704    1257 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d808a40e-573f-4a2e-acdf-bc902a9eafa8-gcp-creds\") on node \"addons-801288\" DevicePath \"\""
	Oct 13 22:17:13 addons-801288 kubelet[1257]: I1013 22:17:13.657717    1257 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6chlb\" (UniqueName: \"kubernetes.io/projected/d808a40e-573f-4a2e-acdf-bc902a9eafa8-kube-api-access-6chlb\") on node \"addons-801288\" DevicePath \"\""
	Oct 13 22:17:14 addons-801288 kubelet[1257]: I1013 22:17:14.235679    1257 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d808a40e-573f-4a2e-acdf-bc902a9eafa8" path="/var/lib/kubelet/pods/d808a40e-573f-4a2e-acdf-bc902a9eafa8/volumes"
	Oct 13 22:17:14 addons-801288 kubelet[1257]: I1013 22:17:14.378694    1257 scope.go:117] "RemoveContainer" containerID="ddcf8dd64aba321ba1369a632d45ecf15b183e45d504b0be61acc535be5614c5"
	Oct 13 22:17:56 addons-801288 kubelet[1257]: I1013 22:17:56.256917    1257 scope.go:117] "RemoveContainer" containerID="fccd4cf16f54d53181a284af16667e04caeed2cc026386b87b51a60cf2e743a3"
	Oct 13 22:17:56 addons-801288 kubelet[1257]: I1013 22:17:56.267270    1257 scope.go:117] "RemoveContainer" containerID="b134e685bac6eb682486d75fd0cd1b6699a592b83eb9a85c76723ea52fdee2cc"
	Oct 13 22:18:02 addons-801288 kubelet[1257]: I1013 22:18:02.232239    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-7nvd4" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:18 addons-801288 kubelet[1257]: I1013 22:18:18.232321    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wnwll" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:25 addons-801288 kubelet[1257]: I1013 22:18:25.232486    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-528wh" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:53 addons-801288 kubelet[1257]: I1013 22:18:53.432686    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2kdj8" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:53 addons-801288 kubelet[1257]: W1013 22:18:53.469645    1257 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/crio-e55a0e09f6075d4331828f1e5214e7e5963b8c4376babc7af9d26ed37cf76b70 WatchSource:0}: Error finding container e55a0e09f6075d4331828f1e5214e7e5963b8c4376babc7af9d26ed37cf76b70: Status 404 returned error can't find the container with id e55a0e09f6075d4331828f1e5214e7e5963b8c4376babc7af9d26ed37cf76b70
	Oct 13 22:18:55 addons-801288 kubelet[1257]: I1013 22:18:55.741228    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2kdj8" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:55 addons-801288 kubelet[1257]: I1013 22:18:55.741292    1257 scope.go:117] "RemoveContainer" containerID="39acabc7fb60c3d029669641990893e178cd382a41babfa1a04bbb1e2f1eab86"
	Oct 13 22:18:56 addons-801288 kubelet[1257]: I1013 22:18:56.297464    1257 scope.go:117] "RemoveContainer" containerID="39acabc7fb60c3d029669641990893e178cd382a41babfa1a04bbb1e2f1eab86"
	Oct 13 22:18:56 addons-801288 kubelet[1257]: I1013 22:18:56.746744    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2kdj8" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:56 addons-801288 kubelet[1257]: I1013 22:18:56.746801    1257 scope.go:117] "RemoveContainer" containerID="a8db30564f0ff8dea96b40a4d6de805c4ae8ef24bd8c3469c72a10e3600eaf75"
	Oct 13 22:18:56 addons-801288 kubelet[1257]: E1013 22:18:56.746948    1257 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2kdj8_kube-system(eb708d02-0e37-40d2-a8b8-804e0e89f091)\"" pod="kube-system/registry-creds-764b6fb674-2kdj8" podUID="eb708d02-0e37-40d2-a8b8-804e0e89f091"
	Oct 13 22:18:57 addons-801288 kubelet[1257]: I1013 22:18:57.751005    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-2kdj8" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:18:57 addons-801288 kubelet[1257]: I1013 22:18:57.751545    1257 scope.go:117] "RemoveContainer" containerID="a8db30564f0ff8dea96b40a4d6de805c4ae8ef24bd8c3469c72a10e3600eaf75"
	Oct 13 22:18:57 addons-801288 kubelet[1257]: E1013 22:18:57.751780    1257 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-2kdj8_kube-system(eb708d02-0e37-40d2-a8b8-804e0e89f091)\"" pod="kube-system/registry-creds-764b6fb674-2kdj8" podUID="eb708d02-0e37-40d2-a8b8-804e0e89f091"
	Oct 13 22:18:59 addons-801288 kubelet[1257]: I1013 22:18:59.968087    1257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6glk\" (UniqueName: \"kubernetes.io/projected/857e8424-cca1-42e3-9273-776e74b7ed6e-kube-api-access-m6glk\") pod \"hello-world-app-5d498dc89-f2smx\" (UID: \"857e8424-cca1-42e3-9273-776e74b7ed6e\") " pod="default/hello-world-app-5d498dc89-f2smx"
	Oct 13 22:18:59 addons-801288 kubelet[1257]: I1013 22:18:59.968620    1257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/857e8424-cca1-42e3-9273-776e74b7ed6e-gcp-creds\") pod \"hello-world-app-5d498dc89-f2smx\" (UID: \"857e8424-cca1-42e3-9273-776e74b7ed6e\") " pod="default/hello-world-app-5d498dc89-f2smx"
	Oct 13 22:19:01 addons-801288 kubelet[1257]: I1013 22:19:01.791632    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-f2smx" podStartSLOduration=2.133291716 podStartE2EDuration="2.791613987s" podCreationTimestamp="2025-10-13 22:18:59 +0000 UTC" firstStartedPulling="2025-10-13 22:19:00.474447476 +0000 UTC m=+304.402953162" lastFinishedPulling="2025-10-13 22:19:01.132769747 +0000 UTC m=+305.061275433" observedRunningTime="2025-10-13 22:19:01.791284327 +0000 UTC m=+305.719790021" watchObservedRunningTime="2025-10-13 22:19:01.791613987 +0000 UTC m=+305.720119681"
	
	
	==> storage-provisioner [c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b] <==
	W1013 22:18:37.390080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:39.392617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:39.399383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:41.402718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:41.407819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:43.410984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:43.415520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:45.419978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:45.428620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:47.432269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:47.438433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:49.441722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:49.446270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:51.449624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:51.456713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:53.460408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:53.477817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:55.482038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:55.486472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:57.489546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:57.497461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:59.500853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:18:59.507847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:19:01.511239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:19:01.519259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-801288 -n addons-801288
helpers_test.go:269: (dbg) Run:  kubectl --context addons-801288 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh: exit status 1 (92.810612ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pr575" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2rvhh" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (265.729522ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 22:19:03.704164  440965 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:19:03.704941  440965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:19:03.704960  440965 out.go:374] Setting ErrFile to fd 2...
	I1013 22:19:03.704967  440965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:19:03.705282  440965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:19:03.705635  440965 mustload.go:65] Loading cluster: addons-801288
	I1013 22:19:03.706033  440965 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:19:03.706056  440965 addons.go:606] checking whether the cluster is paused
	I1013 22:19:03.706200  440965 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:19:03.706224  440965 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:19:03.706726  440965 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:19:03.724399  440965 ssh_runner.go:195] Run: systemctl --version
	I1013 22:19:03.724458  440965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:19:03.743299  440965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:19:03.849789  440965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:19:03.849870  440965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:19:03.880278  440965 cri.go:89] found id: "a8db30564f0ff8dea96b40a4d6de805c4ae8ef24bd8c3469c72a10e3600eaf75"
	I1013 22:19:03.880298  440965 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:19:03.880304  440965 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:19:03.880308  440965 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:19:03.880311  440965 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:19:03.880315  440965 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:19:03.880319  440965 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:19:03.880322  440965 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:19:03.880325  440965 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:19:03.880331  440965 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:19:03.880335  440965 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:19:03.880338  440965 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:19:03.880341  440965 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:19:03.880344  440965 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:19:03.880347  440965 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:19:03.880353  440965 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:19:03.880356  440965 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:19:03.880360  440965 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:19:03.880363  440965 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:19:03.880366  440965 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:19:03.880373  440965 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:19:03.880376  440965 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:19:03.880379  440965 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:19:03.880382  440965 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:19:03.880385  440965 cri.go:89] found id: ""
	I1013 22:19:03.880435  440965 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:19:03.896025  440965 out.go:203] 
	W1013 22:19:03.898991  440965 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:19:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:19:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:19:03.899010  440965 out.go:285] * 
	* 
	W1013 22:19:03.905575  440965 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:19:03.908614  440965 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
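[Editor's note] Every exit status 11 in this report traces back to the same paused-state probe: `addons disable` shells out to `sudo runc list -f json`, and on this crio node the runc state directory /run/runc does not exist, so the probe exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. The Go sketch below is a minimal illustration of one possible guard, not minikube's actual code; the /run/runc path check and the empty-list fallback are assumptions made for this note.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listPausedRaw returns the raw JSON from `runc list -f json`, or an
// empty JSON list when runc has no state directory at all, in which
// case no runc-managed container can possibly be paused.
func listPausedRaw() ([]byte, error) {
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		// runc never created its root: treat as "nothing paused"
		// instead of failing the whole disable operation.
		return []byte("[]"), nil
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list -f json: %w", err)
	}
	return out, nil
}

func main() {
	out, err := listPausedRaw()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s\n", out)
}

With a guard like this the disable path would report an empty paused set instead of aborting; whether minikube should instead query the CRI (crictl) directly on crio nodes is a design choice beyond this report.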
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable ingress --alsologtostderr -v=1: exit status 11 (270.130611ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 22:19:03.964913  441009 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:19:03.965854  441009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:19:03.965876  441009 out.go:374] Setting ErrFile to fd 2...
	I1013 22:19:03.965882  441009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:19:03.966219  441009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:19:03.966558  441009 mustload.go:65] Loading cluster: addons-801288
	I1013 22:19:03.967020  441009 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:19:03.967037  441009 addons.go:606] checking whether the cluster is paused
	I1013 22:19:03.967246  441009 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:19:03.967290  441009 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:19:03.976184  441009 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:19:03.994453  441009 ssh_runner.go:195] Run: systemctl --version
	I1013 22:19:03.994594  441009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:19:04.014796  441009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:19:04.117966  441009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:19:04.118052  441009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:19:04.150378  441009 cri.go:89] found id: "a8db30564f0ff8dea96b40a4d6de805c4ae8ef24bd8c3469c72a10e3600eaf75"
	I1013 22:19:04.150401  441009 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:19:04.150407  441009 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:19:04.150411  441009 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:19:04.150415  441009 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:19:04.150419  441009 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:19:04.150422  441009 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:19:04.150426  441009 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:19:04.150429  441009 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:19:04.150436  441009 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:19:04.150440  441009 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:19:04.150443  441009 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:19:04.150446  441009 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:19:04.150450  441009 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:19:04.150454  441009 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:19:04.150460  441009 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:19:04.150466  441009 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:19:04.150471  441009 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:19:04.150475  441009 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:19:04.150478  441009 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:19:04.150484  441009 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:19:04.150487  441009 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:19:04.150491  441009 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:19:04.150494  441009 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:19:04.150497  441009 cri.go:89] found id: ""
	I1013 22:19:04.150555  441009 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:19:04.165883  441009 out.go:203] 
	W1013 22:19:04.168780  441009 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:19:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:19:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:19:04.168805  441009 out.go:285] * 
	* 
	W1013 22:19:04.175569  441009 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:19:04.178532  441009 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable ingress --alsologtostderr -v=1": exit status 11
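[Editor's note] This is the same probe failure as above (`open /run/runc: no such file or directory`); every `addons disable` call in this run exits with status 11 regardless of which addon is being disabled.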
--- FAIL: TestAddons/parallel/Ingress (144.77s)

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rhjv9" [5c82453c-97f3-4ad2-9f44-698b5e573907] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004147076s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (252.947851ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:39.208604  438581 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:39.209363  438581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:39.209380  438581 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:39.209392  438581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:39.209662  438581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:39.209958  438581 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:39.210311  438581 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:39.210330  438581 addons.go:606] checking whether the cluster is paused
	I1013 22:16:39.210439  438581 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:39.210461  438581 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:39.210968  438581 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:39.230109  438581 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:39.230165  438581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:39.247501  438581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:39.349645  438581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:39.349789  438581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:39.378409  438581 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:39.378438  438581 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:39.378445  438581 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:39.378449  438581 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:39.378453  438581 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:39.378457  438581 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:39.378461  438581 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:39.378465  438581 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:39.378468  438581 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:39.378477  438581 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:39.378486  438581 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:39.378490  438581 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:39.378497  438581 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:39.378500  438581 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:39.378504  438581 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:39.378512  438581 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:39.378519  438581 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:39.378524  438581 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:39.378527  438581 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:39.378530  438581 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:39.378535  438581 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:39.378538  438581 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:39.378541  438581 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:39.378545  438581 cri.go:89] found id: ""
	I1013 22:16:39.378596  438581 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:39.393275  438581 out.go:203] 
	W1013 22:16:39.396229  438581 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:39.396254  438581 out.go:285] * 
	W1013 22:16:39.402714  438581 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:39.405768  438581 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)
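
One reusable detail from these stderr traces: minikube resolves the node's SSH endpoint by indexing the container's published-port map with a Go template (`docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-801288`, which yields port 33163 in this run). A small sketch of the same lookup, assuming only the Docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker published for a given container
// port/protocol, using the same inspect template seen in the logs.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("addons-801288", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p) // "33163" in this run
}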

                                                
                                    
TestAddons/parallel/MetricsServer (6.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.96227ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003581323s
addons_test.go:463: (dbg) Run:  kubectl --context addons-801288 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (266.087709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:33.938677  438444 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:33.939344  438444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:33.939388  438444 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:33.939408  438444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:33.939723  438444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:33.940095  438444 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:33.940514  438444 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:33.940553  438444 addons.go:606] checking whether the cluster is paused
	I1013 22:16:33.940697  438444 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:33.940735  438444 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:33.941239  438444 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:33.958453  438444 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:33.958606  438444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:33.981875  438444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:34.085953  438444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:34.086049  438444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:34.115965  438444 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:34.115988  438444 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:34.115993  438444 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:34.115998  438444 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:34.116001  438444 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:34.116005  438444 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:34.116008  438444 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:34.116011  438444 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:34.116015  438444 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:34.116020  438444 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:34.116024  438444 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:34.116028  438444 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:34.116031  438444 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:34.116034  438444 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:34.116037  438444 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:34.116042  438444 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:34.116049  438444 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:34.116052  438444 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:34.116055  438444 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:34.116058  438444 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:34.116062  438444 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:34.116066  438444 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:34.116069  438444 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:34.116076  438444 cri.go:89] found id: ""
	I1013 22:16:34.116126  438444 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:34.132827  438444 out.go:203] 
	W1013 22:16:34.137554  438444 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:34.137583  438444 out.go:285] * 
	W1013 22:16:34.144212  438444 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:34.147691  438444 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)
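
Note that the functional half of this test passed: the metrics-server pod became healthy within six seconds and `kubectl top pods -n kube-system` returned data; only the addon-disable step hit the paused-check failure. A sketch of that verification step, assuming the Metrics API is registered in the cluster:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `kubectl top` exits non-zero until metrics-server is registered as
	// an APIService and has completed at least one scrape, which makes it
	// a convenient readiness probe for the addon.
	out, err := exec.Command("kubectl", "--context", "addons-801288",
		"top", "pods", "-n", "kube-system").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("metrics not available yet:", err)
	}
}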

                                                
                                    
TestAddons/parallel/CSI (38.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1013 22:16:12.789317  430652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1013 22:16:12.794526  430652 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 22:16:12.794557  430652 kapi.go:107] duration metric: took 5.256078ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.268237ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-801288 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-801288 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [53fa32de-73fb-4e4f-b161-c17208f198fe] Pending
helpers_test.go:352: "task-pv-pod" [53fa32de-73fb-4e4f-b161-c17208f198fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [53fa32de-73fb-4e4f-b161-c17208f198fe] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003390671s
addons_test.go:572: (dbg) Run:  kubectl --context addons-801288 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-801288 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-801288 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-801288 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-801288 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-801288 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-801288 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7fc9997f-eb17-46a7-9996-20ab82fc11f7] Pending
helpers_test.go:352: "task-pv-pod-restore" [7fc9997f-eb17-46a7-9996-20ab82fc11f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7fc9997f-eb17-46a7-9996-20ab82fc11f7] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00348433s
addons_test.go:614: (dbg) Run:  kubectl --context addons-801288 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-801288 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-801288 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (279.507625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:50.770143  439176 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:50.770910  439176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:50.770928  439176 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:50.770936  439176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:50.771246  439176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:50.771564  439176 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:50.771940  439176 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:50.771951  439176 addons.go:606] checking whether the cluster is paused
	I1013 22:16:50.772052  439176 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:50.772067  439176 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:50.772521  439176 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:50.791238  439176 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:50.791303  439176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:50.811769  439176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:50.913556  439176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:50.913637  439176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:50.942546  439176 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:50.942576  439176 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:50.942581  439176 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:50.942585  439176 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:50.942589  439176 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:50.942592  439176 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:50.942595  439176 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:50.942598  439176 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:50.942602  439176 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:50.942615  439176 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:50.942623  439176 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:50.942627  439176 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:50.942630  439176 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:50.942633  439176 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:50.942636  439176 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:50.942644  439176 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:50.942650  439176 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:50.942655  439176 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:50.942658  439176 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:50.942661  439176 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:50.942667  439176 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:50.942671  439176 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:50.942674  439176 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:50.942677  439176 cri.go:89] found id: ""
	I1013 22:16:50.942735  439176 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:50.957462  439176 out.go:203] 
	W1013 22:16:50.960262  439176 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:50.960286  439176 out.go:285] * 
	W1013 22:16:50.966785  439176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:50.969612  439176 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (292.881381ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:51.044962  439219 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:51.045756  439219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:51.045801  439219 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:51.045824  439219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:51.046105  439219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:51.046429  439219 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:51.046815  439219 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:51.046848  439219 addons.go:606] checking whether the cluster is paused
	I1013 22:16:51.046971  439219 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:51.047003  439219 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:51.047512  439219 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:51.073489  439219 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:51.073542  439219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:51.092634  439219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:51.201679  439219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:51.201778  439219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:51.230969  439219 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:51.230997  439219 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:51.231003  439219 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:51.231007  439219 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:51.231010  439219 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:51.231014  439219 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:51.231017  439219 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:51.231020  439219 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:51.231023  439219 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:51.231030  439219 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:51.231034  439219 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:51.231037  439219 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:51.231040  439219 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:51.231044  439219 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:51.231047  439219 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:51.231052  439219 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:51.231060  439219 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:51.231064  439219 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:51.231067  439219 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:51.231070  439219 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:51.231075  439219 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:51.231119  439219 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:51.231123  439219 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:51.231127  439219 cri.go:89] found id: ""
	I1013 22:16:51.231178  439219 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:51.248720  439219 out.go:203] 
	W1013 22:16:51.251599  439219 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:51.251619  439219 out.go:285] * 
	W1013 22:16:51.258051  439219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:51.261074  439219 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.48s)
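
The CSI scenario itself ran end to end before the disable step failed: provision a PVC, mount it in a pod, snapshot the volume, then restore the snapshot into a new PVC and pod. A condensed sketch of the logged sequence (same manifests and context as above; the test's readiness waits between steps, for PVC phase Bound, pod Ready, and snapshot readyToUse, are elided):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl with this run's context; in the real test each
// create is followed by a poll before the next step proceeds.
func run(args ...string) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "addons-801288"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	run("create", "-f", "testdata/csi-hostpath-driver/pvc.yaml")      // PVC "hpvc"
	run("create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml")   // pod "task-pv-pod" mounts hpvc
	run("create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml") // VolumeSnapshot "new-snapshot-demo"
	run("delete", "pod", "task-pv-pod")
	run("delete", "pvc", "hpvc")
	run("create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml")    // new PVC from the snapshot
	run("create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml") // pod mounts the restored PVC
	run("delete", "pod", "task-pv-pod-restore")
	run("delete", "pvc", "hpvc-restore")
	run("delete", "volumesnapshot", "new-snapshot-demo")
}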

                                                
                                    
TestAddons/parallel/Headlamp (3.18s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-801288 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-801288 --alsologtostderr -v=1: exit status 11 (287.783647ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:09.678151  437488 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:09.679339  437488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:09.679391  437488 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:09.679412  437488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:09.679714  437488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:09.680087  437488 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:09.680497  437488 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:09.680537  437488 addons.go:606] checking whether the cluster is paused
	I1013 22:16:09.680699  437488 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:09.680740  437488 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:09.681226  437488 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:09.698484  437488 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:09.698542  437488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:09.716385  437488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:09.822340  437488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:09.822468  437488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:09.858172  437488 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:09.858196  437488 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:09.858201  437488 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:09.858205  437488 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:09.858208  437488 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:09.858212  437488 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:09.858226  437488 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:09.858230  437488 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:09.858233  437488 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:09.858244  437488 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:09.858247  437488 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:09.858250  437488 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:09.858253  437488 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:09.858257  437488 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:09.858261  437488 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:09.858267  437488 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:09.858275  437488 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:09.858279  437488 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:09.858282  437488 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:09.858285  437488 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:09.858290  437488 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:09.858293  437488 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:09.858296  437488 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:09.858299  437488 cri.go:89] found id: ""
	I1013 22:16:09.858352  437488 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:09.873269  437488 out.go:203] 
	W1013 22:16:09.876150  437488 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:09.876176  437488 out.go:285] * 
	W1013 22:16:09.882696  437488 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:09.885640  437488 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-801288 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-801288
helpers_test.go:243: (dbg) docker inspect addons-801288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077",
	        "Created": "2025-10-13T22:13:31.694503561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431817,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:13:31.755415379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/hosts",
	        "LogPath": "/var/lib/docker/containers/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077-json.log",
	        "Name": "/addons-801288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-801288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-801288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077",
	                "LowerDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3e5e9350931ffec57d3c91312f59216677efcc103b3834e3541703e2a1a9651/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-801288",
	                "Source": "/var/lib/docker/volumes/addons-801288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-801288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-801288",
	                "name.minikube.sigs.k8s.io": "addons-801288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcafabd3bb1b99d7e6223456dd38f856f9c25104d77ab365da1a11d226938ae0",
	            "SandboxKey": "/var/run/docker/netns/fcafabd3bb1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-801288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ec:36:65:c8:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c74e1daa794b08aa86481b82dea805b06eed83f0512c353bf34e0ad53c7b7e7a",
	                    "EndpointID": "1ce71dc73c77af431e1e902f4e14f841628e6cbfc89c7186d230479ed13f0a4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-801288",
	                        "bcc7adeb9dda"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
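
The inspect dump above is the container state the harness captures at failure time. Its NetworkSettings.Ports map is the same data minikube queries later in this log to find the host port mapped onto the node's SSH port (22/tcp -> 33163 here). A minimal Go sketch of that lookup, assuming only that docker is on PATH — an illustration, not minikube's actual source:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort shells out to `docker container inspect` with the same Go
	// template that appears in the cli_runner lines further down this log.
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("addons-801288", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(port) // 33163 for the container inspected above
	}
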
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-801288 -n addons-801288
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-801288 logs -n 25: (1.486367879s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-503320 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-503320   │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │ 13 Oct 25 22:12 UTC │
	│ delete  │ -p download-only-503320                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-503320   │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │ 13 Oct 25 22:12 UTC │
	│ start   │ -o=json --download-only -p download-only-648593 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-648593   │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ delete  │ -p download-only-648593                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-648593   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ delete  │ -p download-only-503320                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-503320   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ delete  │ -p download-only-648593                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-648593   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ start   │ --download-only -p download-docker-659560 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-659560 │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ delete  │ -p download-docker-659560                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-659560 │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ start   │ --download-only -p binary-mirror-193732 --alsologtostderr --binary-mirror http://127.0.0.1:45831 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-193732   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ delete  │ -p binary-mirror-193732                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-193732   │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:13 UTC │
	│ addons  │ disable dashboard -p addons-801288                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-801288                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │                     │
	│ start   │ -p addons-801288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:13 UTC │ 13 Oct 25 22:15 UTC │
	│ addons  │ addons-801288 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:15 UTC │                     │
	│ addons  │ addons-801288 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-801288 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-801288          │ jenkins │ v1.37.0 │ 13 Oct 25 22:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:13:06
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
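	
	// Editorial sketch (not part of the captured log): the header layout
	// documented above — [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg —
	// can be split with a small regexp. The pattern below is an assumption
	// fitted to the lines in this dump, not minikube source.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	
	func main() {
		m := klogLine.FindStringSubmatch(
			"I1013 22:13:06.088900  431413 out.go:360] Setting OutFile to fd 1 ...")
		if m != nil {
			// severity, mmdd, time, threadid, file:line, message
			fmt.Println(m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	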
	I1013 22:13:06.088900  431413 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:13:06.089071  431413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:13:06.089083  431413 out.go:374] Setting ErrFile to fd 2...
	I1013 22:13:06.089088  431413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:13:06.089372  431413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:13:06.089881  431413 out.go:368] Setting JSON to false
	I1013 22:13:06.090746  431413 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6922,"bootTime":1760386664,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:13:06.090827  431413 start.go:141] virtualization:  
	I1013 22:13:06.094351  431413 out.go:179] * [addons-801288] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:13:06.097384  431413 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:13:06.097492  431413 notify.go:220] Checking for updates...
	I1013 22:13:06.103402  431413 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:13:06.106347  431413 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:13:06.109243  431413 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:13:06.112304  431413 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:13:06.115194  431413 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:13:06.118366  431413 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:13:06.148690  431413 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:13:06.148808  431413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:13:06.213455  431413 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:13:06.203780917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:13:06.213560  431413 docker.go:318] overlay module found
	I1013 22:13:06.216689  431413 out.go:179] * Using the docker driver based on user configuration
	I1013 22:13:06.219631  431413 start.go:305] selected driver: docker
	I1013 22:13:06.219661  431413 start.go:925] validating driver "docker" against <nil>
	I1013 22:13:06.219675  431413 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:13:06.220474  431413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:13:06.280769  431413 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:13:06.271515563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:13:06.280931  431413 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:13:06.281159  431413 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:13:06.284106  431413 out.go:179] * Using Docker driver with root privileges
	I1013 22:13:06.287033  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:06.287172  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:06.287195  431413 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:13:06.287277  431413 start.go:349] cluster config:
	{Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:13:06.292174  431413 out.go:179] * Starting "addons-801288" primary control-plane node in "addons-801288" cluster
	I1013 22:13:06.295063  431413 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:13:06.298225  431413 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:13:06.301061  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:06.301129  431413 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:13:06.301143  431413 cache.go:58] Caching tarball of preloaded images
	I1013 22:13:06.301145  431413 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:13:06.301307  431413 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:13:06.301321  431413 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:13:06.301655  431413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json ...
	I1013 22:13:06.301676  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json: {Name:mk189791b193351cde1c6fb4f810c4fe55afe717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:06.317110  431413 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 22:13:06.317250  431413 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 22:13:06.317275  431413 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1013 22:13:06.317280  431413 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1013 22:13:06.317292  431413 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1013 22:13:06.317298  431413 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1013 22:13:24.424029  431413 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1013 22:13:24.424076  431413 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:13:24.424106  431413 start.go:360] acquireMachinesLock for addons-801288: {Name:mk70e26ec42122cf271e40434c2fec37d8cdfa21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:13:24.424240  431413 start.go:364] duration metric: took 111.407µs to acquireMachinesLock for "addons-801288"
	I1013 22:13:24.424271  431413 start.go:93] Provisioning new machine with config: &{Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:13:24.424348  431413 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:13:24.427723  431413 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 22:13:24.427978  431413 start.go:159] libmachine.API.Create for "addons-801288" (driver="docker")
	I1013 22:13:24.428024  431413 client.go:168] LocalClient.Create starting
	I1013 22:13:24.428152  431413 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 22:13:25.045390  431413 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 22:13:26.072555  431413 cli_runner.go:164] Run: docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:13:26.089602  431413 cli_runner.go:211] docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:13:26.089691  431413 network_create.go:284] running [docker network inspect addons-801288] to gather additional debugging logs...
	I1013 22:13:26.089715  431413 cli_runner.go:164] Run: docker network inspect addons-801288
	W1013 22:13:26.106002  431413 cli_runner.go:211] docker network inspect addons-801288 returned with exit code 1
	I1013 22:13:26.106042  431413 network_create.go:287] error running [docker network inspect addons-801288]: docker network inspect addons-801288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-801288 not found
	I1013 22:13:26.106059  431413 network_create.go:289] output of [docker network inspect addons-801288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-801288 not found
	
	** /stderr **
	I1013 22:13:26.106160  431413 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:13:26.122981  431413 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae8630}
	I1013 22:13:26.123026  431413 network_create.go:124] attempt to create docker network addons-801288 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 22:13:26.123107  431413 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-801288 addons-801288
	I1013 22:13:26.182969  431413 network_create.go:108] docker network addons-801288 192.168.49.0/24 created
	I1013 22:13:26.183002  431413 kic.go:121] calculated static IP "192.168.49.2" for the "addons-801288" container
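	
	// Editorial sketch (not part of the captured log): the subnet bookkeeping
	// behind the two lines above — given the free CIDR 192.168.49.0/24, the
	// gateway is the .1 address and the first client address (.2) becomes the
	// node container's static IP. Plain stdlib Go, not minikube source.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
		client := net.IPv4(base[0], base[1], base[2], base[3]+2)
		fmt.Println(gateway, client) // 192.168.49.1 192.168.49.2
	}
	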
	I1013 22:13:26.183075  431413 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:13:26.200082  431413 cli_runner.go:164] Run: docker volume create addons-801288 --label name.minikube.sigs.k8s.io=addons-801288 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:13:26.217431  431413 oci.go:103] Successfully created a docker volume addons-801288
	I1013 22:13:26.217517  431413 cli_runner.go:164] Run: docker run --rm --name addons-801288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-801288 --entrypoint /usr/bin/test -v addons-801288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 22:13:27.172435  431413 oci.go:107] Successfully prepared a docker volume addons-801288
	I1013 22:13:27.172527  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:27.172562  431413 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:13:27.172687  431413 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-801288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:13:31.603904  431413 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-801288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431152129s)
	I1013 22:13:31.603937  431413 kic.go:203] duration metric: took 4.431373908s to extract preloaded images to volume ...
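	
	// Editorial sketch (not part of the captured log): the volume-priming
	// pattern the two `docker run` commands above implement — a throwaway
	// container whose entrypoint is tar mounts the named volume and unpacks
	// the preload tarball into it, so the node container later starts with
	// images already under /var. Argument values in main are illustrative
	// placeholders; the real ones appear in the log lines above.
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func extractPreload(preloadTar, volume, kicImage string) error {
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preloadTar+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			kicImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
	}
	
	func main() {
		// Placeholder values for illustration only.
		if err := extractPreload("/path/to/preload.tar.lz4", "addons-801288",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48"); err != nil {
			log.Fatal(err)
		}
	}
	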
	W1013 22:13:31.604083  431413 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:13:31.604197  431413 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:13:31.677573  431413 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-801288 --name addons-801288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-801288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-801288 --network addons-801288 --ip 192.168.49.2 --volume addons-801288:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 22:13:31.986588  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Running}}
	I1013 22:13:32.008313  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.033375  431413 cli_runner.go:164] Run: docker exec addons-801288 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:13:32.083508  431413 oci.go:144] the created container "addons-801288" has a running status.
	I1013 22:13:32.083536  431413 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa...
	I1013 22:13:32.586320  431413 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:13:32.613507  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.631000  431413 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:13:32.631021  431413 kic_runner.go:114] Args: [docker exec --privileged addons-801288 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:13:32.677488  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:13:32.695494  431413 machine.go:93] provisionDockerMachine start ...
	I1013 22:13:32.695606  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:32.711943  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:32.712268  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:32.712288  431413 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:13:32.712888  431413 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34900->127.0.0.1:33163: read: connection reset by peer
	I1013 22:13:35.862977  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-801288
	
	I1013 22:13:35.863000  431413 ubuntu.go:182] provisioning hostname "addons-801288"
	I1013 22:13:35.863112  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:35.880632  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:35.880946  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:35.880962  431413 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-801288 && echo "addons-801288" | sudo tee /etc/hostname
	I1013 22:13:36.037547  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-801288
	
	I1013 22:13:36.037626  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.057039  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:36.057361  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:36.057377  431413 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-801288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-801288/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-801288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:13:36.207250  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:13:36.207275  431413 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 22:13:36.207308  431413 ubuntu.go:190] setting up certificates
	I1013 22:13:36.207318  431413 provision.go:84] configureAuth start
	I1013 22:13:36.207384  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:36.223837  431413 provision.go:143] copyHostCerts
	I1013 22:13:36.223952  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 22:13:36.224099  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 22:13:36.224182  431413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 22:13:36.224285  431413 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.addons-801288 san=[127.0.0.1 192.168.49.2 addons-801288 localhost minikube]
	I1013 22:13:36.476699  431413 provision.go:177] copyRemoteCerts
	I1013 22:13:36.476766  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:13:36.476812  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.494834  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:36.599015  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:13:36.616522  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:13:36.633758  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:13:36.650845  431413 provision.go:87] duration metric: took 443.512331ms to configureAuth
	I1013 22:13:36.650872  431413 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:13:36.651054  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:13:36.651263  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.668276  431413 main.go:141] libmachine: Using SSH client type: native
	I1013 22:13:36.668619  431413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1013 22:13:36.668641  431413 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:13:36.919892  431413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:13:36.919919  431413 machine.go:96] duration metric: took 4.224400784s to provisionDockerMachine
	I1013 22:13:36.919929  431413 client.go:171] duration metric: took 12.491893959s to LocalClient.Create
	I1013 22:13:36.919944  431413 start.go:167] duration metric: took 12.491968147s to libmachine.API.Create "addons-801288"
	I1013 22:13:36.919950  431413 start.go:293] postStartSetup for "addons-801288" (driver="docker")
	I1013 22:13:36.919960  431413 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:13:36.920032  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:13:36.920088  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:36.938055  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.040439  431413 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:13:37.043932  431413 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:13:37.043964  431413 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:13:37.043975  431413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 22:13:37.044042  431413 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 22:13:37.044070  431413 start.go:296] duration metric: took 124.114244ms for postStartSetup
	I1013 22:13:37.044381  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:37.061209  431413 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/config.json ...
	I1013 22:13:37.061507  431413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:13:37.061551  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.079403  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.180068  431413 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:13:37.184831  431413 start.go:128] duration metric: took 12.760467193s to createHost
	I1013 22:13:37.184853  431413 start.go:83] releasing machines lock for "addons-801288", held for 12.760598562s
	I1013 22:13:37.184933  431413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-801288
	I1013 22:13:37.201983  431413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:13:37.202057  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.202318  431413 ssh_runner.go:195] Run: cat /version.json
	I1013 22:13:37.202362  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:13:37.222246  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.224908  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:13:37.409944  431413 ssh_runner.go:195] Run: systemctl --version
	I1013 22:13:37.416175  431413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:13:37.451407  431413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:13:37.455593  431413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:13:37.455674  431413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:13:37.483347  431413 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:13:37.483375  431413 start.go:495] detecting cgroup driver to use...
	I1013 22:13:37.483406  431413 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:13:37.483466  431413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:13:37.500792  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:13:37.514097  431413 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:13:37.514162  431413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:13:37.532346  431413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:13:37.551438  431413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:13:37.670053  431413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:13:37.795096  431413 docker.go:234] disabling docker service ...
	I1013 22:13:37.795182  431413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:13:37.815097  431413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:13:37.828267  431413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:13:37.941936  431413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:13:38.059823  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:13:38.076855  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:13:38.093602  431413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:13:38.093679  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.104040  431413 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:13:38.104123  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.113398  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.122749  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.132322  431413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:13:38.140934  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.150271  431413 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.164266  431413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:13:38.173838  431413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:13:38.181956  431413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:13:38.190631  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:13:38.300465  431413 ssh_runner.go:195] Run: sudo systemctl restart crio
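	
	// Editorial sketch (not part of the captured log): what one step of the
	// sed pipeline above accomplishes — rewriting the pause_image line in
	// CRI-O's drop-in config before crio is restarted. The path and image
	// tag come from this log; the Go program is an illustration, not
	// minikube source.
	package main
	
	import (
		"log"
		"os"
		"regexp"
	)
	
	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}
	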
	I1013 22:13:38.435294  431413 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:13:38.435421  431413 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:13:38.439184  431413 start.go:563] Will wait 60s for crictl version
	I1013 22:13:38.439303  431413 ssh_runner.go:195] Run: which crictl
	I1013 22:13:38.442720  431413 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:13:38.468623  431413 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:13:38.468772  431413 ssh_runner.go:195] Run: crio --version
	I1013 22:13:38.497615  431413 ssh_runner.go:195] Run: crio --version
	I1013 22:13:38.529615  431413 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:13:38.532444  431413 cli_runner.go:164] Run: docker network inspect addons-801288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:13:38.550631  431413 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 22:13:38.554540  431413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
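The one-liner above is an idempotent hosts-file update: grep -v strips any stale entry for the name, the fresh tab-separated mapping is appended, and the result is written to a temp file and copied back with sudo (a plain '>' redirect onto /etc/hosts would not run as root). The same pattern as a generic sketch, with IP and NAME as placeholders:

  IP=192.168.49.1; NAME=host.minikube.internal   # values used above
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$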
	I1013 22:13:38.564330  431413 kubeadm.go:883] updating cluster {Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:13:38.564459  431413 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:13:38.564520  431413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:13:38.597733  431413 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:13:38.597760  431413 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:13:38.597820  431413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:13:38.627389  431413 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:13:38.627414  431413 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:13:38.627422  431413 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 22:13:38.627516  431413 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-801288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:13:38.627619  431413 ssh_runner.go:195] Run: crio config
	I1013 22:13:38.680264  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:38.680287  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:38.680308  431413 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:13:38.680333  431413 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-801288 NodeName:addons-801288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:13:38.680478  431413 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-801288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
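A config generated like the one above can be checked before kubeadm init consumes it; recent kubeadm releases (v1.26 and later) include a validate subcommand. A sketch, assuming the file has been written to /var/tmp/minikube/kubeadm.yaml as in the scp step below:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml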
	I1013 22:13:38.680552  431413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:13:38.688464  431413 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:13:38.688579  431413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:13:38.696340  431413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 22:13:38.709579  431413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:13:38.722369  431413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1013 22:13:38.735041  431413 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:13:38.738653  431413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:13:38.748252  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:13:38.869690  431413 ssh_runner.go:195] Run: sudo systemctl start kubelet
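At this point the kubelet.service unit and its 10-kubeadm.conf drop-in have been written and the daemon reloaded, so systemd should report the merged unit. A quick check (sketch, on the node):

  systemctl cat kubelet          # unit file plus the 10-kubeadm.conf drop-in
  systemctl is-active kubelet    # "active" once the start above succeeds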
	I1013 22:13:38.886202  431413 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288 for IP: 192.168.49.2
	I1013 22:13:38.886268  431413 certs.go:195] generating shared ca certs ...
	I1013 22:13:38.886302  431413 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:38.886464  431413 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 22:13:39.330197  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt ...
	I1013 22:13:39.330227  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt: {Name:mk5023dbb88ff3c4b9af32c9937eb6ec5e270041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.330462  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key ...
	I1013 22:13:39.330478  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key: {Name:mkffd4b77e79837420b00658adbd480528e197d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.331313  431413 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 22:13:39.872952  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt ...
	I1013 22:13:39.872985  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt: {Name:mk6273f1afcfd01cccd9524e5147c2e91200566f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.873734  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key ...
	I1013 22:13:39.873751  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key: {Name:mkc7707ae989847021566a30f7a9177a0d38623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:39.873839  431413 certs.go:257] generating profile certs ...
	I1013 22:13:39.873895  431413 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key
	I1013 22:13:39.873913  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt with IP's: []
	I1013 22:13:40.398479  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt ...
	I1013 22:13:40.398511  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: {Name:mk272d69de7ae58c64aa9603271795d35c92756a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.398708  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key ...
	I1013 22:13:40.398735  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.key: {Name:mk98b60ef0d9536c388401822babaea3b25dad40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.398820  431413 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c
	I1013 22:13:40.398846  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 22:13:40.495000  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c ...
	I1013 22:13:40.495030  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c: {Name:mkdea7fedef3ede0007aaabbf7f10d7be649e6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.495220  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c ...
	I1013 22:13:40.495245  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c: {Name:mk1a8142b5e74627bc756f1d4c3b23f803629997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.495330  431413 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt.da37875c -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt
	I1013 22:13:40.495413  431413 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key.da37875c -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key
	I1013 22:13:40.495470  431413 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key
	I1013 22:13:40.495491  431413 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt with IP's: []
	I1013 22:13:40.922203  431413 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt ...
	I1013 22:13:40.922234  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt: {Name:mkc135bb90d68af9be1c55c33d73e6d39c3043ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.922421  431413 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key ...
	I1013 22:13:40.922436  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key: {Name:mk63d144de190b74c106e99fe8c2cd486bb8d634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:13:40.922629  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:13:40.922669  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:13:40.922699  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:13:40.922731  431413 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 22:13:40.923311  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:13:40.941572  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:13:40.959352  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:13:40.976692  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:13:40.994938  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:13:41.013794  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:13:41.031960  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:13:41.049815  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:13:41.067602  431413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:13:41.085275  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
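With the CA material and profile certs copied under /var/lib/minikube/certs, the chains can be spot-checked on the node with openssl (sketch):

  # each leaf should verify against the CA that issued it
  openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
  openssl verify -CAfile /var/lib/minikube/certs/proxy-client-ca.crt /var/lib/minikube/certs/proxy-client.crt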
	I1013 22:13:41.098599  431413 ssh_runner.go:195] Run: openssl version
	I1013 22:13:41.105121  431413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:13:41.113241  431413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.116837  431413 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.116933  431413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:13:41.157708  431413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
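The b5213941.0 link name follows OpenSSL's subject-hash convention: the hash printed by 'openssl x509 -hash' plus a .0 suffix, which is how lookups in /etc/ssl/certs locate a CA. Reproducing the name by hand (sketch):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0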
	I1013 22:13:41.166313  431413 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:13:41.170173  431413 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:13:41.170241  431413 kubeadm.go:400] StartCluster: {Name:addons-801288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-801288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:13:41.170335  431413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:13:41.170394  431413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:13:41.200810  431413 cri.go:89] found id: ""
	I1013 22:13:41.200934  431413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:13:41.208834  431413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:13:41.216607  431413 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:13:41.216677  431413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:13:41.224276  431413 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:13:41.224297  431413 kubeadm.go:157] found existing configuration files:
	
	I1013 22:13:41.224346  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:13:41.233709  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:13:41.233778  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:13:41.242452  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:13:41.253901  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:13:41.253968  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:13:41.262478  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:13:41.272862  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:13:41.272932  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:13:41.281392  431413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:13:41.289172  431413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:13:41.289241  431413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:13:41.296817  431413 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:13:41.340507  431413 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:13:41.340569  431413 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:13:41.364320  431413 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:13:41.364400  431413 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:13:41.364440  431413 kubeadm.go:318] OS: Linux
	I1013 22:13:41.364493  431413 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:13:41.364554  431413 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:13:41.364607  431413 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:13:41.364663  431413 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:13:41.364719  431413 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:13:41.364774  431413 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:13:41.364826  431413 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:13:41.364880  431413 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:13:41.364933  431413 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:13:41.432455  431413 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:13:41.432577  431413 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:13:41.432678  431413 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:13:41.440560  431413 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:13:41.444095  431413 out.go:252]   - Generating certificates and keys ...
	I1013 22:13:41.444199  431413 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:13:41.444272  431413 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:13:41.864406  431413 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:13:41.963483  431413 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:13:42.734285  431413 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:13:43.515894  431413 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:13:43.688648  431413 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:13:43.688824  431413 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-801288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 22:13:44.248201  431413 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:13:44.248823  431413 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-801288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 22:13:44.400408  431413 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:13:45.120473  431413 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:13:45.380956  431413 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:13:45.381288  431413 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:13:45.806143  431413 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:13:46.717273  431413 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:13:46.848950  431413 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:13:47.527491  431413 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:13:47.743049  431413 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:13:47.743712  431413 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:13:47.746469  431413 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:13:47.749892  431413 out.go:252]   - Booting up control plane ...
	I1013 22:13:47.750008  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:13:47.750091  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:13:47.750171  431413 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:13:47.765510  431413 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:13:47.765825  431413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:13:47.773892  431413 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:13:47.774259  431413 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:13:47.774325  431413 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:13:47.904511  431413 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:13:47.904637  431413 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:13:49.405182  431413 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500953433s
	I1013 22:13:49.408782  431413 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:13:49.408885  431413 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 22:13:49.409172  431413 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:13:49.409267  431413 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:13:50.997667  431413 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.588370048s
	I1013 22:13:53.681700  431413 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.272889071s
	I1013 22:13:55.410830  431413 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001952217s
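The three control-plane probes above are plain HTTPS health endpoints, so they can be replayed by hand from inside the node when a bring-up stalls (sketch; -k skips certificate verification, and ports 10257/10259 listen on localhost only):

  curl -ks https://192.168.49.2:8443/livez     # kube-apiserver
  curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
  curl -ks https://127.0.0.1:10259/livez       # kube-scheduler
  # each prints "ok" when the component is healthy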
	I1013 22:13:55.430688  431413 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:13:55.445522  431413 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:13:55.457711  431413 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:13:55.457928  431413 kubeadm.go:318] [mark-control-plane] Marking the node addons-801288 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:13:55.470684  431413 kubeadm.go:318] [bootstrap-token] Using token: iuujyr.pmmj8z57kgb438qe
	I1013 22:13:55.473894  431413 out.go:252]   - Configuring RBAC rules ...
	I1013 22:13:55.474029  431413 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:13:55.480866  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:13:55.488903  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:13:55.492932  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:13:55.503248  431413 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:13:55.507373  431413 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:13:55.820323  431413 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:13:56.255780  431413 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:13:56.817588  431413 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:13:56.818918  431413 kubeadm.go:318] 
	I1013 22:13:56.818999  431413 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:13:56.819009  431413 kubeadm.go:318] 
	I1013 22:13:56.819108  431413 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:13:56.819119  431413 kubeadm.go:318] 
	I1013 22:13:56.819146  431413 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:13:56.819212  431413 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:13:56.819268  431413 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:13:56.819276  431413 kubeadm.go:318] 
	I1013 22:13:56.819333  431413 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:13:56.819342  431413 kubeadm.go:318] 
	I1013 22:13:56.819393  431413 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:13:56.819402  431413 kubeadm.go:318] 
	I1013 22:13:56.819457  431413 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:13:56.819539  431413 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:13:56.819621  431413 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:13:56.819630  431413 kubeadm.go:318] 
	I1013 22:13:56.819719  431413 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:13:56.819804  431413 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:13:56.819811  431413 kubeadm.go:318] 
	I1013 22:13:56.819906  431413 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token iuujyr.pmmj8z57kgb438qe \
	I1013 22:13:56.820015  431413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 22:13:56.820036  431413 kubeadm.go:318] 	--control-plane 
	I1013 22:13:56.820041  431413 kubeadm.go:318] 
	I1013 22:13:56.820129  431413 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:13:56.820134  431413 kubeadm.go:318] 
	I1013 22:13:56.820219  431413 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token iuujyr.pmmj8z57kgb438qe \
	I1013 22:13:56.820326  431413 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 22:13:56.824275  431413 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:13:56.824517  431413 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:13:56.824629  431413 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
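The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key. Per the upstream kubeadm documentation it can be recomputed on the control plane (sketch, assuming the default RSA CA key and the certificatesDir from the config above):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
  # expected to print 532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6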
	I1013 22:13:56.824649  431413 cni.go:84] Creating CNI manager for ""
	I1013 22:13:56.824657  431413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:13:56.827761  431413 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:13:56.830554  431413 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:13:56.834586  431413 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:13:56.834607  431413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:13:56.848063  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:13:57.129628  431413 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:13:57.129761  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:57.129846  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-801288 minikube.k8s.io/updated_at=2025_10_13T22_13_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=addons-801288 minikube.k8s.io/primary=true
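Once that label pass completes, the node metadata can be confirmed from the host (sketch):

  kubectl get node addons-801288 --show-labels   # the minikube.k8s.io/* labels set above should be present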
	I1013 22:13:57.272756  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:57.272637  431413 ops.go:34] apiserver oom_adj: -16
	I1013 22:13:57.773379  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:58.272794  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:58.773542  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:59.273404  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:13:59.772859  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.273656  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.773504  431413 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:14:00.862097  431413 kubeadm.go:1113] duration metric: took 3.732382676s to wait for elevateKubeSystemPrivileges
	I1013 22:14:00.862128  431413 kubeadm.go:402] duration metric: took 19.691910251s to StartCluster
	I1013 22:14:00.862144  431413 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:14:00.862258  431413 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:14:00.862687  431413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:14:00.862878  431413 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:14:00.863008  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:14:00.863292  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:14:00.863400  431413 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 22:14:00.863478  431413 addons.go:69] Setting yakd=true in profile "addons-801288"
	I1013 22:14:00.863490  431413 addons.go:238] Setting addon yakd=true in "addons-801288"
	I1013 22:14:00.863512  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.864132  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.864568  431413 addons.go:69] Setting inspektor-gadget=true in profile "addons-801288"
	I1013 22:14:00.864584  431413 addons.go:238] Setting addon inspektor-gadget=true in "addons-801288"
	I1013 22:14:00.864607  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.865014  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.867467  431413 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-801288"
	I1013 22:14:00.867565  431413 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-801288"
	I1013 22:14:00.867597  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.868042  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.868643  431413 addons.go:69] Setting metrics-server=true in profile "addons-801288"
	I1013 22:14:00.869066  431413 addons.go:238] Setting addon metrics-server=true in "addons-801288"
	I1013 22:14:00.869116  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.873080  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.868782  431413 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-801288"
	I1013 22:14:00.875877  431413 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-801288"
	I1013 22:14:00.868793  431413 addons.go:69] Setting registry=true in profile "addons-801288"
	I1013 22:14:00.868800  431413 addons.go:69] Setting registry-creds=true in profile "addons-801288"
	I1013 22:14:00.868806  431413 addons.go:69] Setting storage-provisioner=true in profile "addons-801288"
	I1013 22:14:00.868812  431413 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-801288"
	I1013 22:14:00.868817  431413 addons.go:69] Setting volcano=true in profile "addons-801288"
	I1013 22:14:00.868822  431413 addons.go:69] Setting volumesnapshots=true in profile "addons-801288"
	I1013 22:14:00.868830  431413 out.go:179] * Verifying Kubernetes components...
	I1013 22:14:00.868989  431413 addons.go:69] Setting gcp-auth=true in profile "addons-801288"
	I1013 22:14:00.868997  431413 addons.go:69] Setting cloud-spanner=true in profile "addons-801288"
	I1013 22:14:00.869005  431413 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-801288"
	I1013 22:14:00.869011  431413 addons.go:69] Setting default-storageclass=true in profile "addons-801288"
	I1013 22:14:00.869018  431413 addons.go:69] Setting ingress-dns=true in profile "addons-801288"
	I1013 22:14:00.869032  431413 addons.go:69] Setting ingress=true in profile "addons-801288"
	I1013 22:14:00.879188  431413 addons.go:238] Setting addon ingress=true in "addons-801288"
	I1013 22:14:00.879271  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.879838  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.886116  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.886678  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.893929  431413 addons.go:238] Setting addon cloud-spanner=true in "addons-801288"
	I1013 22:14:00.894039  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.894597  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.898863  431413 addons.go:238] Setting addon registry=true in "addons-801288"
	I1013 22:14:00.898967  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.899581  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.903175  431413 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-801288"
	I1013 22:14:00.903227  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.903667  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.920280  431413 addons.go:238] Setting addon registry-creds=true in "addons-801288"
	I1013 22:14:00.920377  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.920870  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.931175  431413 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-801288"
	I1013 22:14:00.931553  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.948928  431413 addons.go:238] Setting addon ingress-dns=true in "addons-801288"
	I1013 22:14:00.948995  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.949482  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.954944  431413 addons.go:238] Setting addon storage-provisioner=true in "addons-801288"
	I1013 22:14:00.955001  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:00.955485  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.975992  431413 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-801288"
	I1013 22:14:00.976333  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:00.999871  431413 addons.go:238] Setting addon volcano=true in "addons-801288"
	I1013 22:14:01.000001  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.000511  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.014818  431413 addons.go:238] Setting addon volumesnapshots=true in "addons-801288"
	I1013 22:14:01.014877  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.015473  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.031953  431413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:14:01.032374  431413 mustload.go:65] Loading cluster: addons-801288
	I1013 22:14:01.032587  431413 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:14:01.032825  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.063431  431413 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 22:14:01.068958  431413 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 22:14:01.068985  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 22:14:01.069084  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.088392  431413 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 22:14:01.101168  431413 addons.go:238] Setting addon default-storageclass=true in "addons-801288"
	I1013 22:14:01.101212  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.101649  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.101974  431413 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 22:14:01.107357  431413 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 22:14:01.107678  431413 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-801288"
	I1013 22:14:01.107729  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.112041  431413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
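The sed pipeline above patches the CoreDNS Corefile in place: it inserts a hosts block resolving host.minikube.internal to the gateway IP ahead of the forward plugin, and adds the log plugin after errors. The patched ConfigMap can be inspected once the replace lands (sketch):

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # the inserted block should read:
  #   hosts {
  #      192.168.49.1 host.minikube.internal
  #      fallthrough
  #   }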
	I1013 22:14:01.112497  431413 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 22:14:01.112513  431413 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 22:14:01.113330  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 22:14:01.113392  431413 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 22:14:01.113500  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.117050  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	W1013 22:14:01.124680  431413 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 22:14:01.124953  431413 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 22:14:01.125805  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:01.130106  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 22:14:01.130128  431413 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 22:14:01.130203  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.155322  431413 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 22:14:01.158323  431413 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 22:14:01.158350  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 22:14:01.158426  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.158763  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1013 22:14:01.161740  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:01.164679  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:01.167572  431413 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 22:14:01.167597  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 22:14:01.167665  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.181253  431413 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 22:14:01.184233  431413 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 22:14:01.187049  431413 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 22:14:01.187160  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 22:14:01.187244  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.206761  431413 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 22:14:01.206793  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 22:14:01.207396  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.214181  431413 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 22:14:01.219417  431413 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 22:14:01.219442  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 22:14:01.219512  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.248850  431413 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 22:14:01.251677  431413 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 22:14:01.251709  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 22:14:01.251776  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.259963  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.270241  431413 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:14:01.270261  431413 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:14:01.270318  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.263520  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:01.278079  431413 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:14:01.282277  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 22:14:01.287419  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.296552  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 22:14:01.298601  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 22:14:01.298660  431413 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 22:14:01.298772  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.343293  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 22:14:01.343623  431413 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:14:01.343641  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:14:01.343706  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.364299  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 22:14:01.367502  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 22:14:01.370174  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.376281  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.380051  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 22:14:01.384614  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 22:14:01.393626  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 22:14:01.398077  431413 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 22:14:01.403385  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.404673  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.404763  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 22:14:01.406574  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 22:14:01.406691  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.421214  431413 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 22:14:01.424884  431413 out.go:179]   - Using image docker.io/busybox:stable
	I1013 22:14:01.427632  431413 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 22:14:01.427667  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 22:14:01.427763  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:01.473793  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.483149  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.489636  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.507377  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.530305  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.545079  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.547760  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.550463  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	W1013 22:14:01.550974  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.551011  431413 retry.go:31] will retry after 178.792854ms: ssh: handshake failed: EOF
	W1013 22:14:01.559187  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.559275  431413 retry.go:31] will retry after 249.651777ms: ssh: handshake failed: EOF
	I1013 22:14:01.559454  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:01.561188  431413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1013 22:14:01.809685  431413 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 22:14:01.809720  431413 retry.go:31] will retry after 309.993973ms: ssh: handshake failed: EOF
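	(The three handshake failures above are absorbed by minikube's retry helper rather than failing the run: each dial error is logged by retry.go together with a fresh, slightly longer jittered delay, 178ms, 249ms, then 309ms here, before the SSH client is recreated. Below is a rough sketch of that pattern, with an invented helper name and illustrative delays rather than minikube's exact schedule.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff reruns fn until it succeeds or attempts are exhausted,
// sleeping a jittered, roughly doubling delay between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(4, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // transient, as above
		}
		return nil
	})
}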
	I1013 22:14:01.978807  431413 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:01.978883  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 22:14:02.052418  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:02.055866  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 22:14:02.065020  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 22:14:02.065046  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 22:14:02.143888  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 22:14:02.143915  431413 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 22:14:02.180856  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 22:14:02.180880  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 22:14:02.265884  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 22:14:02.274649  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 22:14:02.274676  431413 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 22:14:02.299510  431413 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 22:14:02.299534  431413 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 22:14:02.322054  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 22:14:02.322080  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 22:14:02.343417  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 22:14:02.372768  431413 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 22:14:02.372792  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 22:14:02.378202  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 22:14:02.417106  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 22:14:02.417126  431413 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 22:14:02.461530  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 22:14:02.462433  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:14:02.541316  431413 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 22:14:02.541340  431413 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 22:14:02.544408  431413 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 22:14:02.544434  431413 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 22:14:02.549323  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 22:14:02.552901  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:14:02.569979  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 22:14:02.572748  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 22:14:02.640201  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 22:14:02.640278  431413 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 22:14:02.675890  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 22:14:02.675974  431413 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 22:14:02.784660  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 22:14:02.861741  431413 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:02.861766  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 22:14:02.886378  431413 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 22:14:02.886401  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 22:14:03.108705  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 22:14:03.129044  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 22:14:03.129074  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 22:14:03.154716  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:03.324211  431413 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.212122274s)
	I1013 22:14:03.324243  431413 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
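	(The two-second kubectl pipeline that just completed rewrites the coredns ConfigMap in place: it splices a hosts block for host.minikube.internal, resolving to the gateway IP 192.168.49.1, in front of the existing forward plugin, so cluster workloads can reach the host by name. The Go below is illustrative only, doing the same string surgery on a Corefile that the sed one-liner performs.)

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts stanza ahead of the forward plugin,
// mirroring what the sed expression in the log does to the Corefile.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}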
	I1013 22:14:03.325208  431413 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.764000832s)
	I1013 22:14:03.325810  431413 node_ready.go:35] waiting up to 6m0s for node "addons-801288" to be "Ready" ...
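	(From here node_ready.go polls the node's Ready condition for up to six minutes; the recurring "Ready":"False" (will retry) warnings further down are iterations of that loop. A compact sketch of the same wait, shelling out to kubectl for brevity where minikube itself talks to the API server directly:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitNodeReady polls the node's Ready condition until it reports True
// or the deadline passes.
func waitNodeReady(node string, timeout time.Duration) error {
	jp := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jp).Output()
		if err == nil && string(out) == "True" {
			return nil
		}
		fmt.Printf("node %q has \"Ready\":%q status (will retry)\n", node, out)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	fmt.Println(waitNodeReady("addons-801288", 6*time.Minute))
}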
	I1013 22:14:03.531993  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 22:14:03.532020  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 22:14:03.793919  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 22:14:03.793945  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 22:14:03.829681  431413 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-801288" context rescaled to 1 replicas
	I1013 22:14:04.024611  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 22:14:04.024679  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 22:14:04.214086  431413 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 22:14:04.214158  431413 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 22:14:04.451861  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 22:14:04.451888  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 22:14:04.771377  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 22:14:04.771404  431413 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 22:14:05.034976  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 22:14:05.035005  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W1013 22:14:05.331073  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:05.392678  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 22:14:05.392702  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 22:14:05.606863  431413 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 22:14:05.606891  431413 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 22:14:05.712748  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 22:14:06.304237  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.25178547s)
	W1013 22:14:06.304273  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:06.304294  431413 retry.go:31] will retry after 315.57066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
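	(This inspektor-gadget apply keeps failing for a reason unrelated to the cluster: kubectl's client-side validation reports that ig-crd.yaml carries no top-level apiVersion or kind, so every retry, with or without --force, hits the identical error while the ig-deployment.yaml resources go through. A small sketch of the check kubectl is performing, assuming a single-document manifest and using gopkg.in/yaml.v3:)

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta captures the two fields every Kubernetes manifest must declare.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func checkManifest(doc []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: %v", missing)
	}
	return nil
}

func main() {
	// A manifest missing both fields reproduces the log's complaint.
	fmt.Println(checkManifest([]byte("metadata:\n  name: gadget\n")))
}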
	I1013 22:14:06.620538  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 22:14:07.341719  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:07.692540  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.636591448s)
	I1013 22:14:07.692576  431413 addons.go:479] Verifying addon ingress=true in "addons-801288"
	I1013 22:14:07.692940  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.42703082s)
	I1013 22:14:07.692992  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.34955039s)
	I1013 22:14:07.693029  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.314804158s)
	I1013 22:14:07.693072  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.231517222s)
	I1013 22:14:07.693107  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.230654659s)
	I1013 22:14:07.693294  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.143945039s)
	I1013 22:14:07.693356  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.140430361s)
	I1013 22:14:07.693394  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.123387093s)
	I1013 22:14:07.693409  431413 addons.go:479] Verifying addon registry=true in "addons-801288"
	I1013 22:14:07.693756  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.120939823s)
	I1013 22:14:07.693940  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.909249121s)
	I1013 22:14:07.693962  431413 addons.go:479] Verifying addon metrics-server=true in "addons-801288"
	I1013 22:14:07.694015  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.585268341s)
	I1013 22:14:07.694160  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.539413835s)
	W1013 22:14:07.694191  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 22:14:07.694209  431413 retry.go:31] will retry after 155.875955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
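	(This one is an ordering race, as the stderr spells out: the snapshot CRDs and the VolumeSnapshotClass that instantiates one of them travel in the same apply, and on the first pass the API server has not yet registered the new kind. The retry succeeds once discovery catches up; an explicit fix is to apply the CRDs, wait for them to be Established, then apply the custom resources. A hedged sketch of that sequencing, with file paths taken from the log and the kubectl calls illustrative:)

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// 1. Register the CRD first.
	_ = run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// 2. Block until the API server can serve the new kind.
	_ = run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// 3. Only now does the VolumeSnapshotClass have a resource mapping.
	_ = run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}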
	I1013 22:14:07.697367  431413 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-801288 service yakd-dashboard -n yakd-dashboard
	
	I1013 22:14:07.697457  431413 out.go:179] * Verifying registry addon...
	I1013 22:14:07.697487  431413 out.go:179] * Verifying ingress addon...
	I1013 22:14:07.701207  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 22:14:07.701207  431413 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 22:14:07.716637  431413 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 22:14:07.716664  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:07.716872  431413 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 22:14:07.716889  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:07.722223  431413 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
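	(The default-storageclass warning above is Kubernetes' optimistic concurrency at work: something else updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected with a conflict. The standard remedy is a read-modify-write retry that re-fetches the object before each attempt; below is a toy simulation of that loop, where the in-memory "server" variable stands in for the API server.)

package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

type storageClass struct {
	resourceVersion int
	isDefault       bool
}

var server = storageClass{resourceVersion: 1, isDefault: true}

func get() storageClass { return server }

func update(sc storageClass) error {
	if sc.resourceVersion != server.resourceVersion {
		return errConflict // stale read, exactly the failure logged above
	}
	sc.resourceVersion++
	server = sc
	return nil
}

// markNonDefault retries the write, re-reading the latest version each time.
func markNonDefault() error {
	for i := 0; i < 5; i++ {
		sc := get()
		sc.isDefault = false
		err := update(sc)
		if err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err
		}
	}
	return fmt.Errorf("still conflicting after retries")
}

func main() {
	fmt.Println(markNonDefault(), server)
}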
	I1013 22:14:07.850548  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 22:14:08.125530  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.412720723s)
	I1013 22:14:08.125650  431413 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-801288"
	I1013 22:14:08.128865  431413 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 22:14:08.132587  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 22:14:08.152756  431413 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 22:14:08.152784  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
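	(The kapi.go lines that dominate the remainder of this log are one polling loop per addon: list the pods matching a label selector, log their current phase, sleep, and repeat until everything is Running or the timeout fires. A condensed sketch of that wait, again using kubectl where minikube uses its API client:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls pods matching selector in ns until all report Running.
func waitForPods(ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "-n", ns, "get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		ready := len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				ready = false
			}
		}
		if ready {
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: %v\n", selector, phases)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q not ready within %s", selector, timeout)
}

func main() {
	fmt.Println(waitForPods("kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute))
}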
	I1013 22:14:08.220306  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.599726793s)
	W1013 22:14:08.220395  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:08.220523  431413 retry.go:31] will retry after 223.794412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:08.250317  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:08.250862  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:08.444613  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:08.636421  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:08.737266  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:08.737913  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:08.889660  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 22:14:08.889747  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:08.912729  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:09.096306  431413 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 22:14:09.109755  431413 addons.go:238] Setting addon gcp-auth=true in "addons-801288"
	I1013 22:14:09.109802  431413 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:14:09.110241  431413 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:14:09.136341  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:09.138017  431413 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 22:14:09.138077  431413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:14:09.163693  431413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:14:09.207293  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:09.207544  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:09.438427  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:09.438465  431413 retry.go:31] will retry after 610.373608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:09.442213  431413 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 22:14:09.445105  431413 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 22:14:09.447946  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 22:14:09.447977  431413 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 22:14:09.461315  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 22:14:09.461337  431413 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 22:14:09.474115  431413 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 22:14:09.474138  431413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 22:14:09.487643  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 22:14:09.636060  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:09.706312  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:09.707407  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:09.831044  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:09.953698  431413 addons.go:479] Verifying addon gcp-auth=true in "addons-801288"
	I1013 22:14:09.956361  431413 out.go:179] * Verifying gcp-auth addon...
	I1013 22:14:09.960028  431413 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 22:14:09.977098  431413 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 22:14:09.977171  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:10.049513  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:10.137782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:10.240951  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:10.241509  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:10.463983  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:10.635783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:10.705388  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:10.706322  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:10.894957  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:10.894990  431413 retry.go:31] will retry after 1.221428298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:10.964011  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:11.136472  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:11.204861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:11.205002  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:11.464150  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:11.636054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:11.705303  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:11.705778  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:11.963192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:12.117354  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:12.135766  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:12.205202  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:12.206082  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:12.329237  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:12.463783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:12.636571  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:12.704589  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:12.706106  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:12.945151  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:12.945187  431413 retry.go:31] will retry after 1.258306834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:12.962700  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:13.135987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:13.205401  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:13.206649  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:13.463295  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:13.636510  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:13.705483  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:13.705841  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:13.963374  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:14.136579  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:14.204662  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:14.204769  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:14.207253  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:14.329929  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:14.463353  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:14.638071  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:14.709209  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:14.710118  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:14.963489  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 22:14:15.098893  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:15.098974  431413 retry.go:31] will retry after 2.430229456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:15.135999  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:15.205821  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:15.205942  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:15.464090  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:15.636189  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:15.704372  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:15.704769  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:15.963525  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:16.135628  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:16.204763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:16.204927  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:16.463321  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:16.636505  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:16.704707  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:16.704895  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:16.828802  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:16.963946  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:17.136217  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:17.205222  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:17.205498  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:17.462916  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:17.529971  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:17.636322  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:17.706552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:17.707153  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:17.963750  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:18.136285  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:18.206494  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:18.206606  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:18.364144  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:18.364232  431413 retry.go:31] will retry after 3.557976141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
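
Editor's note on the recurring failure above: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file is missing its top-level apiVersion and kind fields; every other manifest in the batch applies cleanly ("unchanged"/"configured"), so only the CRD file is malformed. The error is deterministic, so each backoff retry in this log (3.5s, 3.8s, 5.7s, 8.7s, 15.5s) fails identically, and the suggested --validate=false would only mask the missing fields rather than fix them. For reference, below is a minimal sketch of the header every CustomResourceDefinition manifest needs to pass validation; the group, resource names, and version shown are illustrative placeholders, not necessarily the actual Inspektor Gadget CRD:

apiVersion: apiextensions.k8s.io/v1   # required top-level field the failing document lacks
kind: CustomResourceDefinition        # required top-level field the failing document lacks
metadata:
  name: traces.gadget.example.io      # hypothetical; must be <plural>.<group>
spec:
  group: gadget.example.io            # hypothetical API group
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # opt out of pruning for this subtree; common for loosely-schematized CRDs
          x-kubernetes-preserve-unknown-fields: true

A quick way to reproduce the validation result, assuming the same file layout as in this log, is to run the identical command minikube retries (sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml) and inspect the first lines of each document separated by "---" in the file.
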
	I1013 22:14:18.462986  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:18.635928  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:18.705412  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:18.705630  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:18.829328  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:18.963319  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:19.136054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:19.205564  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:19.205790  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:19.463197  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:19.635854  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:19.704665  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:19.705260  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:19.963041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:20.136037  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:20.205736  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:20.205896  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:20.463247  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:20.636051  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:20.705355  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:20.705456  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:20.963129  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:21.135944  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:21.205177  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:21.205424  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:21.329158  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:21.463071  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:21.635978  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:21.705826  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:21.706109  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:21.923146  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:21.963546  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:22.136514  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:22.206528  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:22.207409  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:22.464287  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:22.637074  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:22.706724  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:22.706973  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:22.772882  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:22.772919  431413 retry.go:31] will retry after 3.841219822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:22.964041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:23.136162  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:23.205374  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:23.205633  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:23.329745  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:23.464145  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:23.636290  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:23.705261  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:23.705445  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:23.963464  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:24.135468  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:24.204771  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:24.205004  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:24.463850  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:24.635722  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:24.705224  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:24.705307  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:24.963050  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:25.136280  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:25.205258  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:25.205347  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:25.463993  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:25.635766  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:25.705105  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:25.705549  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 22:14:25.829489  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:25.963315  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:26.136325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:26.204672  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:26.204804  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:26.463738  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:26.615166  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:26.635998  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:26.706439  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:26.706722  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:26.963174  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:27.136965  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:27.206397  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:27.206870  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:27.416899  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:27.416934  431413 retry.go:31] will retry after 5.688273921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:27.463876  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:27.635919  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:27.705357  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:27.705515  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:27.963610  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:28.135884  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:28.205116  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:28.205503  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:28.329212  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:28.463470  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:28.636443  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:28.704937  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:28.705069  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:28.963886  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:29.135781  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:29.205590  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:29.206775  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:29.463763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:29.635265  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:29.705714  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:29.705832  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:29.963714  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:30.135899  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:30.205223  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:30.205394  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:30.335647  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:30.464045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:30.636117  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:30.705702  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:30.706372  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:30.963385  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:31.136055  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:31.205270  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:31.205420  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:31.463427  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:31.635687  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:31.705127  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:31.705365  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:31.963408  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:32.136810  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:32.205539  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:32.205961  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:32.463545  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:32.636354  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:32.704630  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:32.704923  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:32.828645  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:32.964090  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:33.106285  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:33.136340  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:33.205861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:33.206478  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:33.463326  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:33.637020  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:33.706125  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:33.706734  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:33.958274  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:33.958353  431413 retry.go:31] will retry after 8.699411523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:33.963100  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:34.136063  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:34.205031  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:34.205264  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:34.463560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:34.636328  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:34.705298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:34.705448  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:34.829416  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:34.963136  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:35.136558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:35.204976  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:35.205158  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:35.464398  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:35.636474  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:35.705398  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:35.705532  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:35.963373  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:36.136528  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:36.205120  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:36.205412  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:36.463552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:36.636394  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:36.705857  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:36.706004  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:36.962880  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:37.136107  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:37.205313  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:37.205551  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:37.329615  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:37.463589  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:37.636543  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:37.704814  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:37.705219  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:37.962908  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:38.136784  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:38.205396  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:38.205569  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:38.463908  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:38.635944  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:38.705350  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:38.705417  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:38.963433  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:39.136497  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:39.205468  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:39.205641  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:39.463546  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:39.635679  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:39.705232  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:39.705400  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:39.829262  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:39.963486  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:40.136819  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:40.205167  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:40.205323  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:40.463247  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:40.635961  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:40.705518  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:40.705622  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:40.963376  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:41.135963  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:41.205341  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:41.205929  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:41.463839  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:41.636126  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:41.705597  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:41.705656  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 22:14:41.829364  431413 node_ready.go:57] node "addons-801288" has "Ready":"False" status (will retry)
	I1013 22:14:41.963054  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:42.137703  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:42.205325  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:42.206055  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:42.462940  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:42.635988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:42.658181  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:42.706325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:42.706745  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:42.963930  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:43.161279  431413 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 22:14:43.161343  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:43.266556  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:43.267042  431413 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 22:14:43.267121  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:43.335970  431413 node_ready.go:49] node "addons-801288" is "Ready"
	I1013 22:14:43.336078  431413 node_ready.go:38] duration metric: took 40.010239803s for node "addons-801288" to be "Ready" ...
	I1013 22:14:43.336108  431413 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:14:43.336202  431413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:14:43.537560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:43.671782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:43.729006  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:43.729475  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:43.976350  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:44.146106  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:44.205358  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:44.209258  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:44.312358  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.65414049s)
	W1013 22:14:44.312435  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:44.312512  431413 retry.go:31] will retry after 15.4515216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:14:44.312583  431413 api_server.go:72] duration metric: took 43.449682894s to wait for apiserver process to appear ...
	I1013 22:14:44.312605  431413 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:14:44.312654  431413 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 22:14:44.321111  431413 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1013 22:14:44.322143  431413 api_server.go:141] control plane version: v1.34.1
	I1013 22:14:44.322164  431413 api_server.go:131] duration metric: took 9.521378ms to wait for apiserver health ...
	I1013 22:14:44.322173  431413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:14:44.328903  431413 system_pods.go:59] 19 kube-system pods found
	I1013 22:14:44.328983  431413 system_pods.go:61] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.329009  431413 system_pods.go:61] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.329050  431413 system_pods.go:61] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.329076  431413 system_pods.go:61] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.329101  431413 system_pods.go:61] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.329123  431413 system_pods.go:61] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.329158  431413 system_pods.go:61] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.329185  431413 system_pods.go:61] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.329208  431413 system_pods.go:61] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.329229  431413 system_pods.go:61] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.329261  431413 system_pods.go:61] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.329286  431413 system_pods.go:61] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.329321  431413 system_pods.go:61] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.329344  431413 system_pods.go:61] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.329374  431413 system_pods.go:61] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.329400  431413 system_pods.go:61] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.329423  431413 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.329446  431413 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.329479  431413 system_pods.go:61] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:14:44.329506  431413 system_pods.go:74] duration metric: took 7.326099ms to wait for pod list to return data ...
	I1013 22:14:44.329529  431413 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:14:44.348254  431413 default_sa.go:45] found service account: "default"
	I1013 22:14:44.348330  431413 default_sa.go:55] duration metric: took 18.778999ms for default service account to be created ...
	I1013 22:14:44.348358  431413 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:14:44.427545  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.427628  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.427652  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.427677  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.427722  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.427745  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.427765  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.427795  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.427819  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.427902  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.427926  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.427947  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.427968  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.428002  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.428030  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.428050  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.428071  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.428108  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.428137  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.428159  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:14:44.428191  431413 retry.go:31] will retry after 206.118733ms: missing components: kube-dns
	I1013 22:14:44.526600  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:44.636711  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:44.640817  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.640906  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.640929  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.640968  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.640995  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.641018  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.641039  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.641071  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.641094  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.641115  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.641134  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.641154  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.641183  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.641216  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.641238  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.641262  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.641293  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.641324  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.641349  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.641369  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:44.641410  431413 retry.go:31] will retry after 235.031412ms: missing components: kube-dns
	I1013 22:14:44.756030  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:44.756267  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:44.883611  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:44.883713  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:44.883737  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:44.883772  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:44.883799  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:44.883818  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:44.883877  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:44.883902  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:44.883921  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:44.883943  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:44.883963  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:44.883998  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:44.884018  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:44.884041  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:44.884079  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:44.884103  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:44.884125  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:44.884146  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.884180  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:44.884203  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:44.884233  431413 retry.go:31] will retry after 342.812301ms: missing components: kube-dns
	I1013 22:14:44.981237  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:45.137758  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:45.217052  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:45.217343  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:45.233422  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:45.233526  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:14:45.233598  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:45.233640  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:45.233671  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:45.233695  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:45.233731  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:45.233751  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:45.233771  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:45.233806  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:45.233827  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:45.233848  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:45.233871  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:45.233906  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:45.233934  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:45.233958  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:45.233981  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:45.234015  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.234045  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.234091  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:45.234139  431413 retry.go:31] will retry after 534.817329ms: missing components: kube-dns
	I1013 22:14:45.464484  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:45.642785  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:45.706719  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:45.707021  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:45.775336  431413 system_pods.go:86] 19 kube-system pods found
	I1013 22:14:45.775413  431413 system_pods.go:89] "coredns-66bc5c9577-25z8n" [dd253cd4-c07e-459b-b202-a7fe1a8228ae] Running
	I1013 22:14:45.775444  431413 system_pods.go:89] "csi-hostpath-attacher-0" [0b0a9a01-58cf-432a-986f-3fa5f7c38ecb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 22:14:45.775483  431413 system_pods.go:89] "csi-hostpath-resizer-0" [05ad279b-590d-40c9-bfd0-7f157c89356a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 22:14:45.775510  431413 system_pods.go:89] "csi-hostpathplugin-9mzk9" [d86d1309-1cb6-4448-bddf-dafb5fbf6948] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 22:14:45.775529  431413 system_pods.go:89] "etcd-addons-801288" [0e115222-a922-4585-a76b-01361d481752] Running
	I1013 22:14:45.775549  431413 system_pods.go:89] "kindnet-lqsl4" [f0cd197a-7de9-494a-98e1-9abb604e46b1] Running
	I1013 22:14:45.775588  431413 system_pods.go:89] "kube-apiserver-addons-801288" [c8c953d7-13f7-4b1e-b480-d97e0eb38748] Running
	I1013 22:14:45.775613  431413 system_pods.go:89] "kube-controller-manager-addons-801288" [35435b2f-ac93-45f6-a923-ce10344cca49] Running
	I1013 22:14:45.775633  431413 system_pods.go:89] "kube-ingress-dns-minikube" [ac515736-af66-4e9c-8fe0-f1d64438fd84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 22:14:45.775653  431413 system_pods.go:89] "kube-proxy-8c9vh" [f1861157-a021-4804-81cf-ee0d64f62a0a] Running
	I1013 22:14:45.775672  431413 system_pods.go:89] "kube-scheduler-addons-801288" [73de0b76-4ed9-4d1c-89de-fa94e43fed96] Running
	I1013 22:14:45.775703  431413 system_pods.go:89] "metrics-server-85b7d694d7-5289b" [a6bc08de-f1f3-40ac-8bd0-518abbc48aee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 22:14:45.775728  431413 system_pods.go:89] "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 22:14:45.775752  431413 system_pods.go:89] "registry-6b586f9694-7nvd4" [02be7359-ebe2-4c26-b355-620e5c0014d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 22:14:45.775778  431413 system_pods.go:89] "registry-creds-764b6fb674-2kdj8" [eb708d02-0e37-40d2-a8b8-804e0e89f091] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 22:14:45.775810  431413 system_pods.go:89] "registry-proxy-528wh" [b7657dcd-1445-41df-86af-4c6f104cfdbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 22:14:45.775850  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kbw7j" [80332bf2-9bd6-4054-ad39-ee082964d0bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.775871  431413 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ltgt2" [85841153-0d91-4e22-9ccf-f3159ed3bac2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 22:14:45.775891  431413 system_pods.go:89] "storage-provisioner" [de69ba1c-fcc1-4a9a-88b2-1bbc4a0137a2] Running
	I1013 22:14:45.775927  431413 system_pods.go:126] duration metric: took 1.427549017s to wait for k8s-apps to be running ...
	I1013 22:14:45.775954  431413 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:14:45.776038  431413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:14:45.792808  431413 system_svc.go:56] duration metric: took 16.846057ms WaitForService to wait for kubelet
	I1013 22:14:45.792878  431413 kubeadm.go:586] duration metric: took 44.92997648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
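
The kubelet health probe above is a single command run over SSH, "sudo systemctl is-active --quiet service kubelet", whose exit status alone answers the question; the ~17ms duration is essentially one round trip. A local sketch of the same check (without the SSH and sudo wrapping) could be:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// systemctl is-active exits 0 for an active unit; --quiet suppresses
    	// the state text, so only the exit status is inspected.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }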
	I1013 22:14:45.792912  431413 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:14:45.796717  431413 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:14:45.796797  431413 node_conditions.go:123] node cpu capacity is 2
	I1013 22:14:45.796825  431413 node_conditions.go:105] duration metric: took 3.891311ms to run NodePressure ...
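
The NodePressure verification reads the node object's status: the capacity figures logged above (203034800Ki of ephemeral storage, 2 CPUs) come from node.status.capacity, and the pressure check itself inspects conditions such as MemoryPressure and DiskPressure. The same data can be viewed manually with, for example, "kubectl get node addons-801288 -o jsonpath='{.status.capacity}'" or "kubectl describe node addons-801288".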
	I1013 22:14:45.796852  431413 start.go:241] waiting for startup goroutines ...
	I1013 22:14:45.964281  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:46.136727  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:46.206446  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:46.206783  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:46.463583  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:46.636367  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:46.705474  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:46.706242  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:46.963254  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:47.136335  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:47.206037  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:47.206413  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:47.463878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:47.636445  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:47.706356  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:47.706743  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:47.963460  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:48.135853  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:48.205225  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:48.205700  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:48.463948  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:48.636452  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:48.705445  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:48.705579  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:48.963605  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:49.136045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:49.205746  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:49.205900  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:49.462828  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:49.636105  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:49.705639  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:49.705973  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:49.963232  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:50.136833  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:50.205609  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:50.205740  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:50.463621  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:50.635379  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:50.705439  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:50.705921  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:50.963045  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:51.137026  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:51.206465  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:51.206590  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:51.464761  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:51.644074  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:51.705251  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:51.708291  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:51.964698  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:52.145708  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:52.214939  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:52.217104  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:52.464283  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:52.651663  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:52.714077  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:52.714494  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:52.963598  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:53.137725  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:53.210747  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:53.212299  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:53.463366  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:53.637736  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:53.707499  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:53.708013  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:53.964963  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:54.156250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:54.254512  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:54.254672  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:54.464844  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:54.639485  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:54.705293  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:54.705449  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:54.964115  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:55.137044  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:55.206108  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:55.206215  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:55.463331  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:55.638157  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:55.706467  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:55.706878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:55.964154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:56.136463  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:56.206018  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:56.206274  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:56.463363  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:56.636269  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:56.706625  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:56.706723  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:56.963799  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:57.136323  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:57.205792  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:57.206821  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:57.465573  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:57.635880  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:57.705780  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:57.705934  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:57.962730  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:58.135698  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:58.214140  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:58.215939  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:58.463192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:58.636212  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:58.706041  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:58.706600  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:58.963754  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:59.136563  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:59.208274  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:59.208917  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:59.462933  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:14:59.636441  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:14:59.706416  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:14:59.707325  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:14:59.764660  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:14:59.963036  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:00.136997  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:00.206554  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:00.207192  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:00.470089  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:00.649265  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:00.763625  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:00.764888  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:00.963979  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:01.137477  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:01.205975  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:01.206557  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:01.275543  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.510786866s)
	W1013 22:15:01.275591  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:15:01.275617  431413 retry.go:31] will retry after 13.273918113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
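
This apply fails on the client side, before anything reaches the API server: kubectl validates each YAML document it is given, and a document missing the required top-level apiVersion and kind fields is rejected with exactly the message above, while the other objects in the batch (the gadget namespace, RBAC objects, and daemonset) still go through. The ig-crd.yaml shipped here evidently parses to a document without those fields, so every retry will hit the same wall unless the file is fixed, or validation is bypassed with the --validate=false flag the error message itself suggests. A rough pre-flight check in Go might look like this; the file path comes from the log, while the gopkg.in/yaml.v3 dependency and the naive "---" document splitting are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// a manifest file may hold several documents separated by "---"
    	for i, doc := range strings.Split(string(raw), "\n---") {
    		var obj map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
    			fmt.Printf("doc %d: unparseable: %v\n", i, err)
    			continue
    		}
    		if obj["apiVersion"] == nil || obj["kind"] == nil {
    			// this is the condition kubectl's client-side validation rejects
    			fmt.Printf("doc %d: apiVersion/kind not set\n", i)
    		}
    	}
    }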
	I1013 22:15:01.463687  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:01.637813  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:01.706972  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:01.708270  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:01.963935  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:02.136974  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:02.206787  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:02.207009  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:02.464472  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:02.637349  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:02.705920  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:02.706053  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:02.963480  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:03.136442  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:03.206615  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:03.207069  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:03.463493  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:03.637066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:03.707163  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:03.707606  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:03.964338  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:04.137361  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:04.206627  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:04.207154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:04.464586  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:04.636329  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:04.706136  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:04.706688  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:04.964204  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:05.137368  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:05.206458  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:05.206780  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:05.468518  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:05.636100  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:05.727561  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:05.728128  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:05.964363  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:06.136960  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:06.207298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:06.208414  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:06.464543  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:06.636583  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:06.708887  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:06.709308  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:06.964130  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:07.136988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:07.205333  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:07.206367  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:07.463860  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:07.637206  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:07.706725  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:07.706839  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:07.963688  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:08.136443  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:08.205132  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:08.205906  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:08.463783  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:08.636256  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:08.706208  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:08.706667  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:08.964288  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:09.136877  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:09.205970  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:09.206116  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:09.463306  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:09.636455  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:09.705332  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:09.706009  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:09.963303  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:10.138403  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:10.238358  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:10.238553  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:10.463967  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:10.636782  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:10.737220  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:10.737401  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:10.963796  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:11.136621  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:11.206291  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:11.206911  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:11.463987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:11.636440  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:11.704582  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:11.705350  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:11.963674  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:12.136400  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:12.208275  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:12.208751  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:12.464727  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:12.636251  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:12.704685  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:12.704896  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:12.963046  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:13.136166  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:13.206227  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:13.206750  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:13.464745  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:13.638041  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:13.708483  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:13.708887  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:13.963811  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:14.136983  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:14.206610  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:14.206803  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:14.463851  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:14.550096  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:15:14.635761  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:14.705746  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:14.705796  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:14.964066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:15.137248  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:15.238086  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:15.238580  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:15.464278  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:15.637185  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:15.707066  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:15.707551  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:15.964657  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:16.047117  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.496931923s)
	W1013 22:15:16.047436  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 22:15:16.047543  431413 retry.go:31] will retry after 30.313623634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
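
Note how the retry intervals grow between attempts: 13.27s after the first failure, 30.31s here, and earlier in the kube-dns wait 235ms, 342ms, then 534ms. That growth pattern with non-round numbers is the signature of randomized exponential backoff in retry.go. A minimal sketch of the pattern (the base delay, growth factor, and jitter range are illustrative, not minikube's actual tuning):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second
    	for attempt := 1; attempt <= 3; attempt++ {
    		// jitter spreads retries out so concurrent waiters don't stampede
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, jittered)
    		time.Sleep(jittered)
    		delay *= 2 // exponential growth between attempts
    	}
    }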
	I1013 22:15:16.136776  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:16.206477  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:16.206661  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:16.464166  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:16.636558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:16.706382  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:16.706653  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:16.963538  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:17.135834  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:17.205677  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:17.205882  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:17.463812  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:17.635982  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:17.705915  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:17.706142  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:17.965755  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:18.136855  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:18.206065  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:18.206680  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:18.464549  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:18.636582  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:18.706361  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:18.707718  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:18.964042  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:19.137013  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:19.206337  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:19.206317  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:19.464348  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:19.636393  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:19.706934  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:19.707241  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:19.963295  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:20.137306  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:20.205981  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:20.206190  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:20.463702  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:20.636155  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:20.706172  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:20.706794  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:20.963940  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:21.137221  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:21.206609  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:21.207005  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:21.463255  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:21.639106  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:21.708792  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:21.709051  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:21.962818  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:22.136747  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:22.206550  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:22.206709  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:22.464410  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:22.636560  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:22.705987  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:22.706121  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:22.962787  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:23.136184  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:23.205210  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:23.205388  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:23.464144  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:23.637187  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:23.706803  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:23.706965  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:23.963407  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:24.135936  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:24.207255  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:24.207477  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:24.464338  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:24.637304  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:24.705552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:24.705693  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:24.963639  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:25.137881  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:25.205785  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:25.208516  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:25.464021  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:25.636566  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:25.708588  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:25.708964  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:25.965723  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:26.136086  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:26.206057  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:26.206514  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:26.463885  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:26.636250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:26.704593  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:26.704818  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:26.964038  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:27.136922  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:27.205276  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:27.205539  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:27.463683  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:27.635794  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:27.705837  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:27.706080  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:27.963895  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:28.141430  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:28.206269  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:28.206685  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:28.464165  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:28.636372  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:28.705524  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:28.705647  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:28.964790  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:29.136190  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:29.205936  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:29.206843  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:29.464298  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:29.637377  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:29.706353  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:29.706936  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:29.963148  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:30.137387  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:30.205558  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:30.206545  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:30.487451  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:30.637961  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:30.706300  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:30.706430  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:30.963199  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:31.136976  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:31.205723  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:31.206622  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:31.463712  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:31.636444  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:31.705763  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:31.706250  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:31.963155  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:32.144754  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:32.206088  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:32.206406  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:32.463713  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:32.636001  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:32.704901  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:32.705027  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:32.963263  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:33.136356  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:33.206048  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:33.206142  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:33.463172  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:33.636665  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:33.705271  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:33.705870  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:33.964140  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:34.136475  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:34.205410  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:34.207259  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:34.464133  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:34.636917  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:34.706728  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:34.707228  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:34.963012  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:35.136332  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:35.206909  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:35.206988  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:35.463192  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:35.636968  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:35.705109  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 22:15:35.705301  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:35.963794  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:36.136253  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:36.204547  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:36.205170  431413 kapi.go:107] duration metric: took 1m28.503963664s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 22:15:36.462964  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:36.637480  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:36.705527  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:36.964254  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:37.136380  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:37.204667  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:37.464291  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:37.637300  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:37.705868  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:37.963154  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:38.136798  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:38.204910  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:38.463414  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 22:15:38.638250  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:38.705754  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:38.973342  431413 kapi.go:107] duration metric: took 1m29.0133146s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 22:15:38.976529  431413 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-801288 cluster.
	I1013 22:15:38.979441  431413 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 22:15:38.982404  431413 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 22:15:39.135897  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:39.205213  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:39.636740  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:39.704900  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:40.136853  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:40.205500  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:40.636282  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:40.705340  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:41.136533  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:41.204627  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:41.636962  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:41.705461  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:42.138389  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:42.205429  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:42.637248  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:42.705766  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:43.142922  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:43.207161  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:43.635861  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:43.705437  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:44.139571  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:44.204887  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:44.637009  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:44.705345  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:45.137227  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:45.208221  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:45.636327  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:45.704848  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:46.137934  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:46.206768  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:46.362141  431413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 22:15:46.636890  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:46.738761  431413 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 22:15:47.140503  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:47.207057  431413 kapi.go:107] duration metric: took 1m39.505850505s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 22:15:47.636171  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:47.658633  431413 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.296387256s)
	W1013 22:15:47.658669  431413 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 22:15:47.658751  431413 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
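
The retry warning above comes from kubectl's client-side validation: every Kubernetes manifest must declare both apiVersion and kind, and the generated ig-crd.yaml evidently carries neither. A minimal sketch of reproducing the check without touching the cluster (the file path is taken from the log; the ConfigMap below is purely illustrative):

    # Re-run client-side validation only; this surfaces the same
    # "apiVersion not set, kind not set" error as in the log above.
    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml

    # For comparison, a header that passes validation (illustrative object):
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: validation-example   # hypothetical name
    EOF
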
	I1013 22:15:48.230823  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:48.636920  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:49.137425  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:49.635552  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:50.137143  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:50.638159  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:51.141126  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:51.639473  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:52.136606  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:52.646043  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:53.136228  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:53.636637  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:54.138464  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:54.636346  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:55.136596  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:55.636878  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:56.137221  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:56.636309  431413 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 22:15:57.136198  431413 kapi.go:107] duration metric: took 1m49.003611681s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 22:15:57.139348  431413 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, registry-creds, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1013 22:15:57.142328  431413 addons.go:514] duration metric: took 1m56.278915408s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin registry-creds ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1013 22:15:57.142379  431413 start.go:246] waiting for cluster config update ...
	I1013 22:15:57.142400  431413 start.go:255] writing updated cluster config ...
	I1013 22:15:57.142700  431413 ssh_runner.go:195] Run: rm -f paused
	I1013 22:15:57.146295  431413 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:15:57.150126  431413 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-25z8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.155035  431413 pod_ready.go:94] pod "coredns-66bc5c9577-25z8n" is "Ready"
	I1013 22:15:57.155062  431413 pod_ready.go:86] duration metric: took 4.908795ms for pod "coredns-66bc5c9577-25z8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.157566  431413 pod_ready.go:83] waiting for pod "etcd-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.161995  431413 pod_ready.go:94] pod "etcd-addons-801288" is "Ready"
	I1013 22:15:57.162071  431413 pod_ready.go:86] duration metric: took 4.478682ms for pod "etcd-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.164719  431413 pod_ready.go:83] waiting for pod "kube-apiserver-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.169788  431413 pod_ready.go:94] pod "kube-apiserver-addons-801288" is "Ready"
	I1013 22:15:57.169819  431413 pod_ready.go:86] duration metric: took 5.068536ms for pod "kube-apiserver-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.172375  431413 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.550754  431413 pod_ready.go:94] pod "kube-controller-manager-addons-801288" is "Ready"
	I1013 22:15:57.550781  431413 pod_ready.go:86] duration metric: took 378.381672ms for pod "kube-controller-manager-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:57.750753  431413 pod_ready.go:83] waiting for pod "kube-proxy-8c9vh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.150695  431413 pod_ready.go:94] pod "kube-proxy-8c9vh" is "Ready"
	I1013 22:15:58.150731  431413 pod_ready.go:86] duration metric: took 399.952442ms for pod "kube-proxy-8c9vh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.351247  431413 pod_ready.go:83] waiting for pod "kube-scheduler-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.750282  431413 pod_ready.go:94] pod "kube-scheduler-addons-801288" is "Ready"
	I1013 22:15:58.750312  431413 pod_ready.go:86] duration metric: took 399.037827ms for pod "kube-scheduler-addons-801288" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:15:58.750326  431413 pod_ready.go:40] duration metric: took 1.603996798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:15:58.812267  431413 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:15:58.815531  431413 out.go:179] * Done! kubectl is now configured to use "addons-801288" cluster and "default" namespace by default
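
The gcp-auth messages earlier in this log note that credential mounting can be skipped per pod by adding a label with the gcp-auth-skip-secret key. A minimal sketch, assuming the label value "true" (the log only names the key) and using a placeholder pod name:

    # Opt a single pod out of gcp-auth credential injection.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # placeholder
      labels:
        gcp-auth-skip-secret: "true"  # key from the log; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF

Since the webhook mutates pods at admission (which is why the log says existing pods must be recreated or refreshed), the label has to be present at creation time; labeling a running pod afterwards would not unmount an already-injected secret.
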
	
	
	==> CRI-O <==
	Oct 13 22:15:56 addons-801288 crio[825]: time="2025-10-13T22:15:56.212439034Z" level=info msg="Stopped pod sandbox (already stopped): 1a10e8efd7b30a77328503aab51d81cc1d45ca12fd570cf216adf279e714db24" id=d4fb4f83-249e-4dcc-ac4c-f13e5bda3f03 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:15:56 addons-801288 crio[825]: time="2025-10-13T22:15:56.212966966Z" level=info msg="Removing pod sandbox: 1a10e8efd7b30a77328503aab51d81cc1d45ca12fd570cf216adf279e714db24" id=109176c3-da17-4a6d-9fa1-ad4b9ab1d59b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:15:56 addons-801288 crio[825]: time="2025-10-13T22:15:56.21807894Z" level=info msg="Removed pod sandbox: 1a10e8efd7b30a77328503aab51d81cc1d45ca12fd570cf216adf279e714db24" id=109176c3-da17-4a6d-9fa1-ad4b9ab1d59b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:15:59 addons-801288 crio[825]: time="2025-10-13T22:15:59.997192255Z" level=info msg="Running pod sandbox: default/busybox/POD" id=13b264e8-aafb-486f-9089-ca9da0578803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:15:59 addons-801288 crio[825]: time="2025-10-13T22:15:59.997262416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.014483207Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f UID:86f7740d-0196-4e9d-b013-8bd776eb1fd8 NetNS:/var/run/netns/49aadf2e-55fa-428c-939f-844463f522b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001168ee8}] Aliases:map[]}"
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.014746316Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.135052951Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f UID:86f7740d-0196-4e9d-b013-8bd776eb1fd8 NetNS:/var/run/netns/49aadf2e-55fa-428c-939f-844463f522b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001168ee8}] Aliases:map[]}"
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.135486633Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.156396369Z" level=info msg="Ran pod sandbox f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f with infra container: default/busybox/POD" id=13b264e8-aafb-486f-9089-ca9da0578803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.159850907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1d68772-d675-4bd8-83ba-ef95ed5db62f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.16029665Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b1d68772-d675-4bd8-83ba-ef95ed5db62f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.160418551Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b1d68772-d675-4bd8-83ba-ef95ed5db62f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.171360893Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd17490f-52b2-4b76-827b-d7b01f1156f6 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:16:00 addons-801288 crio[825]: time="2025-10-13T22:16:00.174284536Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.108913664Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=cd17490f-52b2-4b76-827b-d7b01f1156f6 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.109814791Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a93dc545-1e4c-4598-b1b8-c8472f8477bb name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.111703927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9fe86da8-1d1d-4dc5-b47c-9954d96b5bc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.118906683Z" level=info msg="Creating container: default/busybox/busybox" id=aae32b59-c17d-47c3-8f2f-069b5d7f9083 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.11973175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.12619189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.12689837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.146709232Z" level=info msg="Created container e8da23ef29e9d053eb8e13df4e07aec79c2db207eaf36bad5e9211de598f82c8: default/busybox/busybox" id=aae32b59-c17d-47c3-8f2f-069b5d7f9083 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.148166098Z" level=info msg="Starting container: e8da23ef29e9d053eb8e13df4e07aec79c2db207eaf36bad5e9211de598f82c8" id=0441f926-4a75-409d-be1b-5012bfb6c7e4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:16:02 addons-801288 crio[825]: time="2025-10-13T22:16:02.150632593Z" level=info msg="Started container" PID=4947 containerID=e8da23ef29e9d053eb8e13df4e07aec79c2db207eaf36bad5e9211de598f82c8 description=default/busybox/busybox id=0441f926-4a75-409d-be1b-5012bfb6c7e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f
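
The CRI-O entries above trace a full pod lifecycle for default/busybox: RunPodSandbox, CNI attachment, image pull, CreateContainer, StartContainer. A sketch of inspecting the same objects from inside the node with crictl (profile name and filters come from this report; the annotated output shapes are indicative, not captured):

    # From the host running this cluster:
    minikube -p addons-801288 ssh
    # Inside the node, crictl talks to CRI-O over its CRI socket:
    sudo crictl pods --name busybox     # sandbox f6282bc88ae6b... should be Ready
    sudo crictl ps   --name busybox     # container e8da23ef29e9d... Running
    sudo crictl images | grep busybox   # the image pulled at 22:16:02
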
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e8da23ef29e9d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   f6282bc88ae6b       busybox                                     default
	f153bd237ffa7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	871bc19c45720       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	aa5d77a451b8b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	df4b38a9a0c59       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	923be75bce0db       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            20 seconds ago       Running             gadget                                   0                   54d8860d65176       gadget-rhjv9                                gadget
	1d5282a83ae15       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             24 seconds ago       Running             controller                               0                   0187e7c9ea6e5       ingress-nginx-controller-675c5ddd98-g57b8   ingress-nginx
	e2d00394869df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                31 seconds ago       Running             node-driver-registrar                    0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	8c8b8301b714f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 32 seconds ago       Running             gcp-auth                                 0                   5bd667958e19a       gcp-auth-78565c9fb4-4pzcx                   gcp-auth
	dd6d3965841ed       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   5929c54b286f8       registry-proxy-528wh                        kube-system
	bacb39f90a23b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               38 seconds ago       Running             cloud-spanner-emulator                   0                   dca3cc770403c       cloud-spanner-emulator-86bd5cbb97-hskxm     default
	d6de93ce6a1b7       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     44 seconds ago       Running             nvidia-device-plugin-ctr                 0                   85e3b0401215c       nvidia-device-plugin-daemonset-wnwll        kube-system
	3c2edf4d8430b       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           58 seconds ago       Running             registry                                 0                   b404624a65037       registry-6b586f9694-7nvd4                   kube-system
	f6e30d8af3b56       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   77a417c5fe55e       snapshot-controller-7d9fbc56b8-kbw7j        kube-system
	1b60be6e9e6c2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   4d119e8734ea6       snapshot-controller-7d9fbc56b8-ltgt2        kube-system
	d5134fdc018a5       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   f8d3d47176b06       csi-hostpath-resizer-0                      kube-system
	7c917abe8d5f4       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   a7a08147550a7       csi-hostpath-attacher-0                     kube-system
	7f49cfff22d36       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   812d421b9aa0c       kube-ingress-dns-minikube                   kube-system
	c287771e032e0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   2536d9e1babfe       local-path-provisioner-648f6765c9-9zzrw     local-path-storage
	473c0a66370cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   09d2a2305791b       ingress-nginx-admission-patch-2rvhh         ingress-nginx
	76b89318f9c3f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   fbc9f92318a2c       ingress-nginx-admission-create-pr575        ingress-nginx
	6cec628f84ed1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   c96236d71ba59       csi-hostpathplugin-9mzk9                    kube-system
	e6995f51e4b11       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   e3633df7f341f       metrics-server-85b7d694d7-5289b             kube-system
	21350e9dbc830       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   0c989c368ef64       yakd-dashboard-5ff678cb9-z9pmq              yakd-dashboard
	1835a21d66fa2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   06167605247a4       coredns-66bc5c9577-25z8n                    kube-system
	c559aae25c459       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   15ed4168fb425       storage-provisioner                         kube-system
	44caccd237f7a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   68752e2dc6f6a       kube-proxy-8c9vh                            kube-system
	225be8120336e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   ce0ea667d9fcb       kindnet-lqsl4                               kube-system
	3c07379b01c2b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   95ac2a1773d77       kube-scheduler-addons-801288                kube-system
	6a94f2e155481       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   0329529ebc88a       kube-controller-manager-addons-801288       kube-system
	6757789a08c6d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   8b7764dc93382       kube-apiserver-addons-801288                kube-system
	ac07affd57c99       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   ea532b566913d       etcd-addons-801288                          kube-system
	
	
	==> coredns [1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d] <==
	[INFO] 10.244.0.14:50844 - 31726 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000080162s
	[INFO] 10.244.0.14:50844 - 19376 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002127491s
	[INFO] 10.244.0.14:50844 - 4761 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001977283s
	[INFO] 10.244.0.14:50844 - 24274 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000101176s
	[INFO] 10.244.0.14:50844 - 52715 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000120359s
	[INFO] 10.244.0.14:42997 - 50720 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167972s
	[INFO] 10.244.0.14:42997 - 50949 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168743s
	[INFO] 10.244.0.14:52663 - 7414 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122484s
	[INFO] 10.244.0.14:52663 - 7611 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105212s
	[INFO] 10.244.0.14:51715 - 32418 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098304s
	[INFO] 10.244.0.14:51715 - 32237 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009517s
	[INFO] 10.244.0.14:50625 - 17194 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001211726s
	[INFO] 10.244.0.14:50625 - 17639 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001407209s
	[INFO] 10.244.0.14:42614 - 10998 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011217s
	[INFO] 10.244.0.14:42614 - 10578 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147591s
	[INFO] 10.244.0.19:42533 - 58712 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017397s
	[INFO] 10.244.0.19:51824 - 22655 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145523s
	[INFO] 10.244.0.19:47677 - 62217 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000246166s
	[INFO] 10.244.0.19:35832 - 54231 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000223257s
	[INFO] 10.244.0.19:37666 - 24937 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014284s
	[INFO] 10.244.0.19:34194 - 30698 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155648s
	[INFO] 10.244.0.19:51192 - 60390 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002334288s
	[INFO] 10.244.0.19:45068 - 49505 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001931622s
	[INFO] 10.244.0.19:50199 - 21187 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001753902s
	[INFO] 10.244.0.19:39136 - 15447 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00181306s
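
The NXDOMAIN fan-out above is ordinary Kubernetes DNS behaviour: with options ndots:5, any name with fewer than five dots (registry.kube-system.svc.cluster.local, storage.googleapis.com) is first tried against each entry in the pod's search path, so CoreDNS logs one miss per search domain before the final absolute query returns NOERROR. A sketch of confirming this from a pod in this cluster (the busybox pod name is from this report; the resolv.conf contents shown are typical values, not captured output):

    kubectl exec busybox -- cat /etc/resolv.conf
    # Expected shape (values assumed):
    #   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    #   nameserver 10.96.0.10
    #   options ndots:5
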
	
	
	==> describe nodes <==
	Name:               addons-801288
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-801288
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=addons-801288
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_13_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-801288
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-801288"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-801288
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:16:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:16:08 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:16:08 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:16:08 +0000   Mon, 13 Oct 2025 22:13:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:16:08 +0000   Mon, 13 Oct 2025 22:14:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-801288
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                16256a10-42da-4126-a586-4dbee9443032
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-hskxm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  gadget                      gadget-rhjv9                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gcp-auth                    gcp-auth-78565c9fb4-4pzcx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-g57b8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m4s
	  kube-system                 coredns-66bc5c9577-25z8n                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m10s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpathplugin-9mzk9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 etcd-addons-801288                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m15s
	  kube-system                 kindnet-lqsl4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-addons-801288                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-addons-801288        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-8c9vh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-addons-801288                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 metrics-server-85b7d694d7-5289b              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m6s
	  kube-system                 nvidia-device-plugin-daemonset-wnwll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-6b586f9694-7nvd4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-creds-764b6fb674-2kdj8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 registry-proxy-528wh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-kbw7j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-ltgt2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  local-path-storage          local-path-provisioner-648f6765c9-9zzrw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-z9pmq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m8s                   kube-proxy       
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node addons-801288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node addons-801288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s (x8 over 2m22s)  kubelet          Node addons-801288 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m15s                  kubelet          Node addons-801288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m15s                  kubelet          Node addons-801288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m15s                  kubelet          Node addons-801288 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m11s                  node-controller  Node addons-801288 event: Registered Node addons-801288 in Controller
	  Normal   NodeReady                88s                    kubelet          Node addons-801288 status is now: NodeReady
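	(The "Allocated resources" block is just the column sums of the non-terminated pod table above; a quick cross-check of the CPU requests figure, with values taken from that table:
	# six pods request 100m each; apiserver 250m; controller-manager 200m:
	echo $(( (100*6 + 250 + 200) * 100 / 2000 ))%   # -> 52% of the 2-CPU node
	)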
	
	
	==> dmesg <==
	[Oct13 21:01] hrtimer: interrupt took 13518544 ns
	[Oct13 22:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct13 22:13] overlayfs: idmapped layers are currently not supported
	[  +0.064178] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825] <==
	{"level":"warn","ts":"2025-10-13T22:13:52.584040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.601335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.619360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.637253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.659817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.677561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.693646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.712277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.741320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.747797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.772714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.787184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.800825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.815176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.829827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.857604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.872595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.890406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:13:52.952309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:08.337331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:08.354612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.644392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.660387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.696971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:14:30.701056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55780","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8c8b8301b714fa679f1c44dfb22722f50e8de8ee4701d1c90f7528f1db1ff614] <==
	2025/10/13 22:15:38 GCP Auth Webhook started!
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
	2025/10/13 22:15:59 Ready to marshal response ...
	2025/10/13 22:15:59 Ready to write response ...
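	(The marshal/write pairs at 22:15:59 coincide with the admission of the default/busybox pod created at that time; the webhook's mutation is visible on the pod itself — the kubelet log further down mounts the injected "gcp-creds" volume. A sketch of inspecting it:
	kubectl --context addons-801288 get pod busybox -o jsonpath='{.spec.volumes[*].name}'
	)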
	
	
	==> kernel <==
	 22:16:11 up  1:58,  0 user,  load average: 2.58, 3.54, 3.96
	Linux addons-801288 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b] <==
	E1013 22:14:32.505230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:14:32.505233       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:14:32.505353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:14:32.505492       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:14:34.205130       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:14:34.205269       1 metrics.go:72] Registering metrics
	I1013 22:14:34.205428       1 controller.go:711] "Syncing nftables rules"
	I1013 22:14:42.509530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:14:42.509587       1 main.go:301] handling current node
	I1013 22:14:52.507158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:14:52.507219       1 main.go:301] handling current node
	I1013 22:15:02.505014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:02.505040       1 main.go:301] handling current node
	I1013 22:15:12.504717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:12.504856       1 main.go:301] handling current node
	I1013 22:15:22.504306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:22.504340       1 main.go:301] handling current node
	I1013 22:15:32.507385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:32.507420       1 main.go:301] handling current node
	I1013 22:15:42.504833       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:42.504922       1 main.go:301] handling current node
	I1013 22:15:52.504077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:15:52.504232       1 main.go:301] handling current node
	I1013 22:16:02.503987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:16:02.504050       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a] <==
	W1013 22:14:08.336698       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 22:14:08.352402       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1013 22:14:09.842841       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.55.156"}
	W1013 22:14:30.644360       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 22:14:30.660396       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 22:14:30.686365       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 22:14:30.700521       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 22:14:43.077311       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.077369       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:43.078127       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.078178       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:43.154332       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.55.156:443: connect: connection refused
	E1013 22:14:43.154483       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.55.156:443: connect: connection refused" logger="UnhandledError"
	E1013 22:14:54.346246       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	W1013 22:14:54.346445       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 22:14:54.346545       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 22:14:54.347675       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	E1013 22:14:54.352814       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.217.225:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.217.225:443: connect: connection refused" logger="UnhandledError"
	I1013 22:14:54.451983       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 22:16:08.936101       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48736: use of closed network connection
	E1013 22:16:09.160865       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48766: use of closed network connection
	E1013 22:16:09.295879       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48772: use of closed network connection
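	(The connection-refused webhook calls — note they fail open — and the metrics.k8s.io 503s read as startup ordering: the aggregation layer probes the gcp-auth and metrics-server backends before their pods serve, and the handler.go line shows v1beta1.metrics.k8s.io registering once it responds. To check the APIService state afterwards:
	kubectl --context addons-801288 get apiservice v1beta1.metrics.k8s.io
	)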
	
	
	==> kube-controller-manager [6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4] <==
	I1013 22:14:00.648969       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:14:00.651327       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:14:00.656613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:00.659753       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:14:00.670123       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:14:00.670211       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:14:00.670228       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:14:00.671075       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:14:00.670547       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:14:00.670561       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:14:00.671173       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:14:00.671192       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:14:00.670250       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:14:00.677249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:00.677357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:14:00.677390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1013 22:14:05.700791       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1013 22:14:30.636690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 22:14:30.636844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 22:14:30.636891       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 22:14:30.667256       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 22:14:30.671796       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 22:14:30.737425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:14:30.773159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:14:45.625797       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
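	(The single replica_set error at 22:14:05 is a creation-order race: the metrics-server ReplicaSet was synced before its ServiceAccount existed, and a later sync evidently succeeded, since the pod appears in the node's pod table above. Confirming the account exists:
	kubectl --context addons-801288 -n kube-system get serviceaccount metrics-server
	)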
	
	
	==> kube-proxy [44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb] <==
	I1013 22:14:02.602877       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:14:02.687287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:14:02.788224       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:14:02.788259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 22:14:02.788339       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:14:02.818064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:14:02.818128       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:14:02.825005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:14:02.825268       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:14:02.825281       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:14:02.826712       1 config.go:200] "Starting service config controller"
	I1013 22:14:02.826722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:14:02.826748       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:14:02.826752       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:14:02.826763       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:14:02.826770       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:14:02.828800       1 config.go:309] "Starting node config controller"
	I1013 22:14:02.828811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:14:02.828817       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:14:02.927145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:14:02.927179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:14:02.927213       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
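	(Per this log, kube-proxy runs the iptables proxier in IPv4-primary dual-stack mode; the nodePortAddresses warning only means NodePorts accept on all local IPs, and the message itself names the tightening flag. The programmed service chains can be inspected on the node, assuming iptables is available there:
	out/minikube-linux-arm64 -p addons-801288 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head
	)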
	
	
	==> kube-scheduler [3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641] <==
	E1013 22:13:53.680771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:13:53.680827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:13:53.680885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:13:53.686609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:13:53.686707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:13:53.686807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:13:53.686863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:13:53.686914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:13:53.686962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:13:53.687005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:13:53.687093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:13:53.687148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:13:53.687197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:13:53.687246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:13:53.687325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:13:54.593673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:13:54.629187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:13:54.649321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:13:54.650019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:13:54.685983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:13:54.760791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:13:54.869763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 22:13:54.891711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:13:54.902550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1013 22:13:57.232134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
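	(The "Failed to watch ... is forbidden" burst is expected on cold start: the scheduler comes up before the bootstrap RBAC roles are reconciled, and the errors stop by the 22:13:57 informer sync above. A sketch of verifying the permission after bootstrap:
	kubectl --context addons-801288 auth can-i list pods --as=system:kube-scheduler
	)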
	
	
	==> kubelet <==
	Oct 13 22:15:22 addons-801288 kubelet[1257]: I1013 22:15:22.234221    1257 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a1af25-a416-474e-99dd-5cdf13ad1450" path="/var/lib/kubelet/pods/60a1af25-a416-474e-99dd-5cdf13ad1450/volumes"
	Oct 13 22:15:26 addons-801288 kubelet[1257]: I1013 22:15:26.237203    1257 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767413ca-c3d9-4b56-893b-38f7a015351b" path="/var/lib/kubelet/pods/767413ca-c3d9-4b56-893b-38f7a015351b/volumes"
	Oct 13 22:15:26 addons-801288 kubelet[1257]: I1013 22:15:26.840805    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wnwll" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:26 addons-801288 kubelet[1257]: I1013 22:15:26.857533    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-wnwll" podStartSLOduration=2.142410974 podStartE2EDuration="43.857516138s" podCreationTimestamp="2025-10-13 22:14:43 +0000 UTC" firstStartedPulling="2025-10-13 22:14:44.479302132 +0000 UTC m=+48.407807817" lastFinishedPulling="2025-10-13 22:15:26.194407205 +0000 UTC m=+90.122912981" observedRunningTime="2025-10-13 22:15:26.856236549 +0000 UTC m=+90.784742243" watchObservedRunningTime="2025-10-13 22:15:26.857516138 +0000 UTC m=+90.786021832"
	Oct 13 22:15:27 addons-801288 kubelet[1257]: I1013 22:15:27.844604    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-wnwll" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:32 addons-801288 kubelet[1257]: I1013 22:15:32.875062    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-86bd5cbb97-hskxm" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:33 addons-801288 kubelet[1257]: I1013 22:15:33.878524    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-86bd5cbb97-hskxm" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:35 addons-801288 kubelet[1257]: I1013 22:15:35.886046    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-528wh" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:35 addons-801288 kubelet[1257]: I1013 22:15:35.900811    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/cloud-spanner-emulator-86bd5cbb97-hskxm" podStartSLOduration=44.23483604 podStartE2EDuration="1m31.900790424s" podCreationTimestamp="2025-10-13 22:14:04 +0000 UTC" firstStartedPulling="2025-10-13 22:14:44.497369941 +0000 UTC m=+48.425875627" lastFinishedPulling="2025-10-13 22:15:32.163324317 +0000 UTC m=+96.091830011" observedRunningTime="2025-10-13 22:15:32.889570764 +0000 UTC m=+96.818076450" watchObservedRunningTime="2025-10-13 22:15:35.900790424 +0000 UTC m=+99.829296110"
	Oct 13 22:15:36 addons-801288 kubelet[1257]: I1013 22:15:36.889786    1257 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-528wh" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 22:15:38 addons-801288 kubelet[1257]: I1013 22:15:38.912494    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-528wh" podStartSLOduration=4.973858886 podStartE2EDuration="55.912475687s" podCreationTimestamp="2025-10-13 22:14:43 +0000 UTC" firstStartedPulling="2025-10-13 22:14:44.588338086 +0000 UTC m=+48.516843771" lastFinishedPulling="2025-10-13 22:15:35.526954886 +0000 UTC m=+99.455460572" observedRunningTime="2025-10-13 22:15:35.902518874 +0000 UTC m=+99.831024593" watchObservedRunningTime="2025-10-13 22:15:38.912475687 +0000 UTC m=+102.840981372"
	Oct 13 22:15:47 addons-801288 kubelet[1257]: I1013 22:15:47.010475    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4pzcx" podStartSLOduration=50.573825936 podStartE2EDuration="1m38.010445641s" podCreationTimestamp="2025-10-13 22:14:09 +0000 UTC" firstStartedPulling="2025-10-13 22:14:51.305657291 +0000 UTC m=+55.234162977" lastFinishedPulling="2025-10-13 22:15:38.742276996 +0000 UTC m=+102.670782682" observedRunningTime="2025-10-13 22:15:38.913955223 +0000 UTC m=+102.842460908" watchObservedRunningTime="2025-10-13 22:15:47.010445641 +0000 UTC m=+110.938951335"
	Oct 13 22:15:47 addons-801288 kubelet[1257]: E1013 22:15:47.216341    1257 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 13 22:15:47 addons-801288 kubelet[1257]: E1013 22:15:47.216434    1257 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb708d02-0e37-40d2-a8b8-804e0e89f091-gcr-creds podName:eb708d02-0e37-40d2-a8b8-804e0e89f091 nodeName:}" failed. No retries permitted until 2025-10-13 22:16:51.216415911 +0000 UTC m=+175.144921597 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/eb708d02-0e37-40d2-a8b8-804e0e89f091-gcr-creds") pod "registry-creds-764b6fb674-2kdj8" (UID: "eb708d02-0e37-40d2-a8b8-804e0e89f091") : secret "registry-creds-gcr" not found
	Oct 13 22:15:51 addons-801288 kubelet[1257]: I1013 22:15:51.016129    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-g57b8" podStartSLOduration=62.667486922 podStartE2EDuration="1m44.016106555s" podCreationTimestamp="2025-10-13 22:14:07 +0000 UTC" firstStartedPulling="2025-10-13 22:15:05.291419154 +0000 UTC m=+69.219924839" lastFinishedPulling="2025-10-13 22:15:46.640038786 +0000 UTC m=+110.568544472" observedRunningTime="2025-10-13 22:15:47.012001024 +0000 UTC m=+110.940506726" watchObservedRunningTime="2025-10-13 22:15:51.016106555 +0000 UTC m=+114.944612249"
	Oct 13 22:15:53 addons-801288 kubelet[1257]: I1013 22:15:53.408523    1257 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 13 22:15:53 addons-801288 kubelet[1257]: I1013 22:15:53.408581    1257 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 13 22:15:55 addons-801288 kubelet[1257]: I1013 22:15:55.800730    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-rhjv9" podStartSLOduration=70.046316788 podStartE2EDuration="1m49.800712909s" podCreationTimestamp="2025-10-13 22:14:06 +0000 UTC" firstStartedPulling="2025-10-13 22:15:10.659299817 +0000 UTC m=+74.587805511" lastFinishedPulling="2025-10-13 22:15:50.413695725 +0000 UTC m=+114.342201632" observedRunningTime="2025-10-13 22:15:51.019138298 +0000 UTC m=+114.947644116" watchObservedRunningTime="2025-10-13 22:15:55.800712909 +0000 UTC m=+119.729218603"
	Oct 13 22:15:56 addons-801288 kubelet[1257]: I1013 22:15:56.171612    1257 scope.go:117] "RemoveContainer" containerID="46946e66e6fd73112db88d1ee48566f19d85dfeee1ea6e7bcb41463595ab8ead"
	Oct 13 22:15:56 addons-801288 kubelet[1257]: I1013 22:15:56.185192    1257 scope.go:117] "RemoveContainer" containerID="8e96fc61a8012d9808664daebf4fc9a95ead131e0d47ea9be454a0eff819e8ca"
	Oct 13 22:15:57 addons-801288 kubelet[1257]: I1013 22:15:57.034056    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-9mzk9" podStartSLOduration=1.931533307 podStartE2EDuration="1m14.034037685s" podCreationTimestamp="2025-10-13 22:14:43 +0000 UTC" firstStartedPulling="2025-10-13 22:14:43.855568248 +0000 UTC m=+47.784073950" lastFinishedPulling="2025-10-13 22:15:55.958072633 +0000 UTC m=+119.886578328" observedRunningTime="2025-10-13 22:15:57.033276149 +0000 UTC m=+120.961781843" watchObservedRunningTime="2025-10-13 22:15:57.034037685 +0000 UTC m=+120.962543379"
	Oct 13 22:15:59 addons-801288 kubelet[1257]: I1013 22:15:59.732084    1257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/86f7740d-0196-4e9d-b013-8bd776eb1fd8-gcp-creds\") pod \"busybox\" (UID: \"86f7740d-0196-4e9d-b013-8bd776eb1fd8\") " pod="default/busybox"
	Oct 13 22:15:59 addons-801288 kubelet[1257]: I1013 22:15:59.732152    1257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvp6\" (UniqueName: \"kubernetes.io/projected/86f7740d-0196-4e9d-b013-8bd776eb1fd8-kube-api-access-pzvp6\") pod \"busybox\" (UID: \"86f7740d-0196-4e9d-b013-8bd776eb1fd8\") " pod="default/busybox"
	Oct 13 22:16:00 addons-801288 kubelet[1257]: W1013 22:16:00.155017    1257 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bcc7adeb9dda1be0d08128703f2d95ede18b9036dc97bfc20e8cb903d557b077/crio-f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f WatchSource:0}: Error finding container f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f: Status 404 returned error can't find the container with id f6282bc88ae6b948f766b21013269d497bbdadeff2c411ad089fbc562b36ba2f
	Oct 13 22:16:03 addons-801288 kubelet[1257]: I1013 22:16:03.080655    1257 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.130804957 podStartE2EDuration="4.080636044s" podCreationTimestamp="2025-10-13 22:15:59 +0000 UTC" firstStartedPulling="2025-10-13 22:16:00.160890082 +0000 UTC m=+124.089395768" lastFinishedPulling="2025-10-13 22:16:02.110721169 +0000 UTC m=+126.039226855" observedRunningTime="2025-10-13 22:16:03.08010524 +0000 UTC m=+127.008610934" watchObservedRunningTime="2025-10-13 22:16:03.080636044 +0000 UTC m=+127.009141730"
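	(Two recurring items above: the "secret \"gcp-auth\" not found" pull-secret warnings are non-fatal — the pulls proceed and the pods start — while the registry-creds pod is genuinely stuck because its "registry-creds-gcr" secret is absent, with mount retries backing off to 1m4s. Checking the missing secret directly:
	kubectl --context addons-801288 -n kube-system get secret registry-creds-gcr   # expected: NotFound
	)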
	
	
	==> storage-provisioner [c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b] <==
	W1013 22:15:46.559665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:48.564639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:48.569739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:50.573461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:50.582785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:52.591308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:52.597411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:54.600847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:54.608023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:56.610825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:56.615126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:58.624411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:15:58.631385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:00.634831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:00.641989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:02.645067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:02.649673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:04.653331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:04.660157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:06.662668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:06.667604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:08.670802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:08.675891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:10.679499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:16:10.688914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
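	(The warning repeats roughly every two seconds in GET/UPDATE pairs, which is consistent with a leader-election renew loop still using the deprecated core/v1 Endpoints lock — an inference from the cadence, not something the log states. The replacement API the warning points to can be listed with:
	kubectl --context addons-801288 -n kube-system get endpointslices
	)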
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-801288 -n addons-801288
helpers_test.go:269: (dbg) Run:  kubectl --context addons-801288 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh registry-creds-764b6fb674-2kdj8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh registry-creds-764b6fb674-2kdj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh registry-creds-764b6fb674-2kdj8: exit status 1 (88.690987ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pr575" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2rvhh" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2kdj8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-801288 describe pod ingress-nginx-admission-create-pr575 ingress-nginx-admission-patch-2rvhh registry-creds-764b6fb674-2kdj8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.170179ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:16:12.577461  437956 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:12.578243  437956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:12.578266  437956 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:12.578272  437956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:12.578644  437956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:12.578998  437956 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:12.579539  437956 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:12.579577  437956 addons.go:606] checking whether the cluster is paused
	I1013 22:16:12.579799  437956 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:12.579848  437956 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:12.580380  437956 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:12.597795  437956 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:12.597858  437956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:12.619441  437956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:12.722313  437956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:12.722407  437956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:12.752685  437956 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:12.752713  437956 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:12.752722  437956 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:12.752726  437956 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:12.752729  437956 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:12.752733  437956 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:12.752737  437956 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:12.752740  437956 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:12.752743  437956 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:12.752749  437956 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:12.752752  437956 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:12.752756  437956 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:12.752758  437956 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:12.752761  437956 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:12.752764  437956 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:12.752769  437956 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:12.752773  437956 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:12.752776  437956 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:12.752779  437956 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:12.752782  437956 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:12.752786  437956 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:12.752789  437956 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:12.752792  437956 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:12.752795  437956 cri.go:89] found id: ""
	I1013 22:16:12.752847  437956 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:12.768484  437956 out.go:203] 
	W1013 22:16:12.771417  437956 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:12.771444  437956 out.go:285] * 
	* 
	W1013 22:16:12.778185  437956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:12.781046  437956 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.18s)
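Note on the failure mode shared by this test and the CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd failures below: every `addons disable` invocation dies at the same step. minikube first lists the kube-system containers through crictl (which succeeds, as the "found id:" lines show), then verifies the cluster is not paused by running `sudo runc list -f json` on the node, and that step exits 1 because /run/runc does not exist on this crio image. A minimal manual repro, assuming the addons-801288 profile from the logs is still up; that crio here is backed by a runtime other than runc (or a non-default runc root) is an assumption, not something the log confirms:

	# crictl listing succeeds, exactly as in the captured stderr above
	minikube -p addons-801288 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the paused check fails: open /run/runc: no such file or directory
	minikube -p addons-801288 ssh -- sudo runc list -f json

If the runc state directory really is absent on this image, the paused check itself is the common bug surface, not the individual addons being disabled.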
TestAddons/parallel/CloudSpanner (5.28s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-hskxm" [4c723ec4-7ac2-41d9-b5b6-b91dd22b04eb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003243852s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (274.098711ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 22:17:16.836942  439822 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:17:16.837704  439822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:16.837719  439822 out.go:374] Setting ErrFile to fd 2...
	I1013 22:17:16.837724  439822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:16.838019  439822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:17:16.838330  439822 mustload.go:65] Loading cluster: addons-801288
	I1013 22:17:16.838691  439822 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:16.838710  439822 addons.go:606] checking whether the cluster is paused
	I1013 22:17:16.838815  439822 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:16.838838  439822 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:17:16.839349  439822 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:17:16.856668  439822 ssh_runner.go:195] Run: systemctl --version
	I1013 22:17:16.856735  439822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:17:16.873853  439822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:17:16.983716  439822 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:17:16.983818  439822 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:17:17.028014  439822 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:17:17.028038  439822 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:17:17.028044  439822 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:17:17.028048  439822 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:17:17.028051  439822 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:17:17.028055  439822 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:17:17.028058  439822 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:17:17.028061  439822 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:17:17.028064  439822 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:17:17.028070  439822 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:17:17.028073  439822 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:17:17.028081  439822 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:17:17.028084  439822 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:17:17.028087  439822 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:17:17.028090  439822 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:17:17.028095  439822 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:17:17.028098  439822 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:17:17.028101  439822 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:17:17.028104  439822 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:17:17.028107  439822 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:17:17.028112  439822 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:17:17.028115  439822 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:17:17.028118  439822 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:17:17.028121  439822 cri.go:89] found id: ""
	I1013 22:17:17.028173  439822 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:17:17.043285  439822 out.go:203] 
	W1013 22:17:17.046249  439822 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:17:17.046270  439822 out.go:285] * 
	* 
	W1013 22:17:17.053090  439822 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:17:17.055895  439822 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)
TestAddons/parallel/LocalPath (8.43s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-801288 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-801288 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [811fce7e-9dc5-418f-a784-66622cfbbf0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [811fce7e-9dc5-418f-a784-66622cfbbf0d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [811fce7e-9dc5-418f-a784-66622cfbbf0d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004207679s
addons_test.go:967: (dbg) Run:  kubectl --context addons-801288 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 ssh "cat /opt/local-path-provisioner/pvc-85b9cd0c-3387-41b6-94c8-0436514e03ca_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-801288 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-801288 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.752167ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 22:17:11.554736  439716 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:17:11.555540  439716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:11.555582  439716 out.go:374] Setting ErrFile to fd 2...
	I1013 22:17:11.555605  439716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:11.555922  439716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:17:11.556255  439716 mustload.go:65] Loading cluster: addons-801288
	I1013 22:17:11.556671  439716 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:11.556716  439716 addons.go:606] checking whether the cluster is paused
	I1013 22:17:11.556843  439716 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:11.556887  439716 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:17:11.557378  439716 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:17:11.575037  439716 ssh_runner.go:195] Run: systemctl --version
	I1013 22:17:11.575121  439716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:17:11.593130  439716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:17:11.706966  439716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:17:11.707106  439716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:17:11.739169  439716 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:17:11.739188  439716 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:17:11.739192  439716 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:17:11.739196  439716 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:17:11.739199  439716 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:17:11.739203  439716 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:17:11.739206  439716 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:17:11.739213  439716 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:17:11.739217  439716 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:17:11.739226  439716 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:17:11.739229  439716 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:17:11.739232  439716 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:17:11.739235  439716 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:17:11.739238  439716 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:17:11.739241  439716 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:17:11.739249  439716 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:17:11.739252  439716 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:17:11.739257  439716 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:17:11.739260  439716 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:17:11.739263  439716 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:17:11.739267  439716 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:17:11.739270  439716 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:17:11.739273  439716 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:17:11.739276  439716 cri.go:89] found id: ""
	I1013 22:17:11.739336  439716 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:17:11.760212  439716 out.go:203] 
	W1013 22:17:11.763199  439716 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:17:11.763232  439716 out.go:285] * 
	* 
	W1013 22:17:11.769830  439716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:17:11.773049  439716 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.43s)
TestAddons/parallel/NvidiaDevicePlugin (6.28s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-wnwll" [11ce6e30-6f43-49c6-847f-52321d5615db] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003322198s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (274.723536ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 22:16:57.844359  439345 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:16:57.845266  439345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:57.845315  439345 out.go:374] Setting ErrFile to fd 2...
	I1013 22:16:57.845336  439345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:16:57.845625  439345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:16:57.845955  439345 mustload.go:65] Loading cluster: addons-801288
	I1013 22:16:57.846412  439345 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:57.846456  439345 addons.go:606] checking whether the cluster is paused
	I1013 22:16:57.846591  439345 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:16:57.846631  439345 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:16:57.847216  439345 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:16:57.869507  439345 ssh_runner.go:195] Run: systemctl --version
	I1013 22:16:57.869688  439345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:16:57.890545  439345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:16:57.993558  439345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:16:57.993639  439345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:16:58.036000  439345 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:16:58.036020  439345 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:16:58.036025  439345 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:16:58.036029  439345 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:16:58.036032  439345 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:16:58.036035  439345 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:16:58.036038  439345 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:16:58.036043  439345 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:16:58.036046  439345 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:16:58.036057  439345 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:16:58.036064  439345 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:16:58.036068  439345 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:16:58.036071  439345 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:16:58.036082  439345 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:16:58.036086  439345 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:16:58.036091  439345 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:16:58.036095  439345 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:16:58.036099  439345 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:16:58.036102  439345 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:16:58.036105  439345 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:16:58.036110  439345 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:16:58.036116  439345 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:16:58.036120  439345 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:16:58.036123  439345 cri.go:89] found id: ""
	I1013 22:16:58.036185  439345 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:16:58.054371  439345 out.go:203] 
	W1013 22:16:58.057385  439345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:16:58.057413  439345 out.go:285] * 
	* 
	W1013 22:16:58.064172  439345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:16:58.067429  439345 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)
TestAddons/parallel/Yakd (5.28s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-z9pmq" [b28ed181-4530-49e5-93c9-eb5b9d33f91e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004784918s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-801288 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-801288 addons disable yakd --alsologtostderr -v=1: exit status 11 (268.106052ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1013 22:17:03.131292  439417 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:17:03.132384  439417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:03.132438  439417 out.go:374] Setting ErrFile to fd 2...
	I1013 22:17:03.132461  439417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:17:03.132812  439417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:17:03.133238  439417 mustload.go:65] Loading cluster: addons-801288
	I1013 22:17:03.133692  439417 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:03.133741  439417 addons.go:606] checking whether the cluster is paused
	I1013 22:17:03.133872  439417 config.go:182] Loaded profile config "addons-801288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:17:03.133922  439417 host.go:66] Checking if "addons-801288" exists ...
	I1013 22:17:03.134472  439417 cli_runner.go:164] Run: docker container inspect addons-801288 --format={{.State.Status}}
	I1013 22:17:03.153935  439417 ssh_runner.go:195] Run: systemctl --version
	I1013 22:17:03.153992  439417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-801288
	I1013 22:17:03.173611  439417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/addons-801288/id_rsa Username:docker}
	I1013 22:17:03.278550  439417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:17:03.278642  439417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:17:03.310313  439417 cri.go:89] found id: "f153bd237ffa75b3e24d87a2161e0dfcef9dbee69a77f314b465dac77eed02fd"
	I1013 22:17:03.310336  439417 cri.go:89] found id: "871bc19c4572097109a201036357418dadc32ac851ef1bfed1ba6748b145f7a9"
	I1013 22:17:03.310341  439417 cri.go:89] found id: "aa5d77a451b8bc96fc43eb7ef4915780b90ee45438c266b7b4514b841fb5278e"
	I1013 22:17:03.310345  439417 cri.go:89] found id: "df4b38a9a0c595175cf2376ccd8cb19983eb68d3fc92c400239735c9a051711f"
	I1013 22:17:03.310349  439417 cri.go:89] found id: "e2d00394869df1c013c36881c7c2fcb41dce9aaff387fd1a489aab83437d7570"
	I1013 22:17:03.310353  439417 cri.go:89] found id: "dd6d3965841ed91f44bbab0d92b0c932c45cce09a3953c95b4f8197e4764ca07"
	I1013 22:17:03.310357  439417 cri.go:89] found id: "d6de93ce6a1b713edf2af03974c506a880db4ad0f7fce5ae7da36191c854f1fc"
	I1013 22:17:03.310360  439417 cri.go:89] found id: "3c2edf4d8430b97d65e5e87102ee6f42854e2a97ed7f5f7ef42a87b42ddec401"
	I1013 22:17:03.310364  439417 cri.go:89] found id: "f6e30d8af3b56354ced4163604d983a30ba222509fc72cab7c7c2c52a88218f0"
	I1013 22:17:03.310370  439417 cri.go:89] found id: "1b60be6e9e6c2d638590b09f12d8236c0dcfffcd84bd0b2b387c3ecb9104d48b"
	I1013 22:17:03.310374  439417 cri.go:89] found id: "d5134fdc018a5aec875ba7c9cf15b8a78049ee51c386e12ee88a21cc9dd372f2"
	I1013 22:17:03.310377  439417 cri.go:89] found id: "7c917abe8d5f4f45d0ae18b9584f8e0b92552ffdec36f851235971305600c8cd"
	I1013 22:17:03.310380  439417 cri.go:89] found id: "7f49cfff22d36babdd17e8b09bfc7472bb4ae94b0f9a2e8d5b126604c918c4d0"
	I1013 22:17:03.310384  439417 cri.go:89] found id: "6cec628f84ed1fcd528aa5f29cd424a8ebcba08dfd90b0a5f39d06ba67b60324"
	I1013 22:17:03.310387  439417 cri.go:89] found id: "e6995f51e4b119d22c3f8e3fc60487fa080656c377ec6263a22ebba7625e8a84"
	I1013 22:17:03.310397  439417 cri.go:89] found id: "1835a21d66fa25cc966b5de5331a3cbf4e2752b89085557ffb13d143a649963d"
	I1013 22:17:03.310401  439417 cri.go:89] found id: "c559aae25c45981f41fb5ca304fc706f47e0efd120c7b253dd8e87d55dc2418b"
	I1013 22:17:03.310406  439417 cri.go:89] found id: "44caccd237f7ab1125b6f139583fa8c7bc1255dbe61996013705d688ca7e1dbb"
	I1013 22:17:03.310409  439417 cri.go:89] found id: "225be8120336e63a288420a6838adc3b97eb1cbf17c2ca7239015049e4e3081b"
	I1013 22:17:03.310412  439417 cri.go:89] found id: "3c07379b01c2b7932b73fbd28b7f6702a01b23eef9da51bb024010d1a0e98641"
	I1013 22:17:03.310417  439417 cri.go:89] found id: "6a94f2e155481d737a9667e1e272697aaebbb7e6c71106554f704df08028cda4"
	I1013 22:17:03.310420  439417 cri.go:89] found id: "6757789a08c6d2ef0c2a56b251f559a4a204148aa5c60c704c9de606dc232d6a"
	I1013 22:17:03.310423  439417 cri.go:89] found id: "ac07affd57c9964f5fef09b2c963f0ee34a552a57f6b3c843487270baa447825"
	I1013 22:17:03.310426  439417 cri.go:89] found id: ""
	I1013 22:17:03.310478  439417 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:17:03.329067  439417 out.go:203] 
	W1013 22:17:03.332148  439417 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:17:03.332182  439417 out.go:285] * 
	* 
	W1013 22:17:03.339097  439417 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:17:03.342250  439417 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-801288 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.28s)
TestFunctional/parallel/ServiceCmdConnect (603.48s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-544242 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-544242 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-c666d" [73896e22-ffc7-4f50-82bd-a27eb7bd3d49] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-544242 -n functional-544242
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-13 22:33:13.982157366 +0000 UTC m=+1227.645714516
functional_test.go:1645: (dbg) Run:  kubectl --context functional-544242 describe po hello-node-connect-7d85dfc575-c666d -n default
functional_test.go:1645: (dbg) kubectl --context functional-544242 describe po hello-node-connect-7d85dfc575-c666d -n default:
Name:             hello-node-connect-7d85dfc575-c666d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-544242/192.168.49.2
Start Time:       Mon, 13 Oct 2025 22:23:13 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q74qz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-q74qz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c666d to functional-544242
  Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
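The decisive event above is the kubelet pull error: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That message comes from the short-name resolution rules of containers-registries.conf as used by cri-o: in enforcing mode, an unqualified image that could come from more than one unqualified-search registry cannot be resolved non-interactively, so the pull is rejected outright. A sketch of the kind of node configuration that produces this behavior; the file path and registry list are assumptions, since the actual registries.conf was not captured in this log:

	# /etc/containers/registries.conf (sketch, not the captured node config)
	short-name-mode = "enforcing"
	unqualified-search-registries = ["docker.io", "quay.io"]

With two candidate registries and no alias defined for kicbase/echo-server, every pull attempt fails identically, which is why the events show x5 ErrImagePull settling into steady ImagePullBackOff.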
functional_test.go:1645: (dbg) Run:  kubectl --context functional-544242 logs hello-node-connect-7d85dfc575-c666d -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-544242 logs hello-node-connect-7d85dfc575-c666d -n default: exit status 1 (106.336323ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-c666d" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-544242 logs hello-node-connect-7d85dfc575-c666d -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-544242 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-c666d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-544242/192.168.49.2
Start Time:       Mon, 13 Oct 2025 22:23:13 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q74qz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-q74qz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c666d to functional-544242
  Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m34s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1618: (dbg) Run:  kubectl --context functional-544242 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-544242 logs -l app=hello-node-connect: exit status 1 (95.062662ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-c666d" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-544242 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-544242 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.8.92
IPs:                      10.106.8.92
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30345/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
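The service itself is wired up correctly (the selector and NodePort above look sane; Endpoints is empty only because the pod never became Ready), so the straightforward workaround is to fully qualify the image, leaving enforcing short-name mode nothing to disambiguate. A hypothetical re-run of the failing step; that the image lives on docker.io is an assumption:

	kubectl --context functional-544242 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
	kubectl --context functional-544242 expose deployment hello-node-connect --type=NodePort --port=8080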
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-544242
helpers_test.go:243: (dbg) docker inspect functional-544242:
-- stdout --
	[
	    {
	        "Id": "1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59",
	        "Created": "2025-10-13T22:20:21.208413577Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:20:21.270954064Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59/hostname",
	        "HostsPath": "/var/lib/docker/containers/1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59/hosts",
	        "LogPath": "/var/lib/docker/containers/1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59/1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59-json.log",
	        "Name": "/functional-544242",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-544242:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-544242",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1012722eb9fc71d3a48101b69b67a7919dfc56c76cd0874eafb53fa15f3b7b59",
	                "LowerDir": "/var/lib/docker/overlay2/ede5e016f305ae1cb2c5de46804431205fb0c69248407079279a28f09e377ddf-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ede5e016f305ae1cb2c5de46804431205fb0c69248407079279a28f09e377ddf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ede5e016f305ae1cb2c5de46804431205fb0c69248407079279a28f09e377ddf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ede5e016f305ae1cb2c5de46804431205fb0c69248407079279a28f09e377ddf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-544242",
	                "Source": "/var/lib/docker/volumes/functional-544242/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-544242",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-544242",
	                "name.minikube.sigs.k8s.io": "functional-544242",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ca995b8537671eee5d747dbf98288b3a8c4455dbe5225890e90f7c3e1fb8e136",
	            "SandboxKey": "/var/run/docker/netns/ca995b853767",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-544242": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:a5:3d:5a:44:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f343faba690423390b7a8a78c338b84abd20503958b4cc83a2819c1b7f66b853",
	                    "EndpointID": "56f4dbc0bbb8f47d5c993f7eca165024acacbfd4c03bcafcfbf8bd367baeecec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-544242",
	                        "1012722eb9fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
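The inspect output shows the kicbase container running and its ports published on loopback, e.g. the API server port 8441/tcp mapped to 127.0.0.1:33176, so the container layer itself looks healthy. As a sketch of how one such mapping can be pulled out of the inspect data with a Go template (the same -f template style the cli_runner.go calls later in this log use):

  # Print the host port published for the API server; for the state captured above this prints 33176.
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-544242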
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-544242 -n functional-544242
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 logs -n 25: (1.44561856s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │ 13 Oct 25 22:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │ 13 Oct 25 22:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │ 13 Oct 25 22:22 UTC │
	│ kubectl │ functional-544242 kubectl -- --context functional-544242 get pods                                                          │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │ 13 Oct 25 22:22 UTC │
	│ start   │ -p functional-544242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │ 13 Oct 25 22:22 UTC │
	│ service │ invalid-svc -p functional-544242                                                                                           │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:22 UTC │                     │
	│ cp      │ functional-544242 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ config  │ functional-544242 config unset cpus                                                                                        │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ config  │ functional-544242 config get cpus                                                                                          │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ config  │ functional-544242 config set cpus 2                                                                                        │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ config  │ functional-544242 config get cpus                                                                                          │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ config  │ functional-544242 config unset cpus                                                                                        │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ ssh     │ functional-544242 ssh -n functional-544242 sudo cat /home/docker/cp-test.txt                                               │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ config  │ functional-544242 config get cpus                                                                                          │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ ssh     │ functional-544242 ssh echo hello                                                                                           │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ cp      │ functional-544242 cp functional-544242:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2928095060/001/cp-test.txt │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ ssh     │ functional-544242 ssh cat /etc/hostname                                                                                    │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ ssh     │ functional-544242 ssh -n functional-544242 sudo cat /home/docker/cp-test.txt                                               │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ tunnel  │ functional-544242 tunnel --alsologtostderr                                                                                 │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ tunnel  │ functional-544242 tunnel --alsologtostderr                                                                                 │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ cp      │ functional-544242 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ tunnel  │ functional-544242 tunnel --alsologtostderr                                                                                 │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ ssh     │ functional-544242 ssh -n functional-544242 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ addons  │ functional-544242 addons list                                                                                              │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ addons  │ functional-544242 addons list -o json                                                                                      │ functional-544242 │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:22:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:22:13.320764  450351 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:22:13.320877  450351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:22:13.320881  450351 out.go:374] Setting ErrFile to fd 2...
	I1013 22:22:13.320884  450351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:22:13.321146  450351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:22:13.321569  450351 out.go:368] Setting JSON to false
	I1013 22:22:13.322646  450351 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7470,"bootTime":1760386664,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:22:13.322705  450351 start.go:141] virtualization:  
	I1013 22:22:13.326393  450351 out.go:179] * [functional-544242] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:22:13.329619  450351 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:22:13.329686  450351 notify.go:220] Checking for updates...
	I1013 22:22:13.337276  450351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:22:13.340252  450351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:22:13.343318  450351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:22:13.346311  450351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:22:13.349329  450351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:22:13.352987  450351 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:22:13.353124  450351 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:22:13.375705  450351 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:22:13.375831  450351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:22:13.451714  450351 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-13 22:22:13.441660346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:22:13.451825  450351 docker.go:318] overlay module found
	I1013 22:22:13.455038  450351 out.go:179] * Using the docker driver based on existing profile
	I1013 22:22:13.457905  450351 start.go:305] selected driver: docker
	I1013 22:22:13.457915  450351 start.go:925] validating driver "docker" against &{Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:22:13.458011  450351 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:22:13.458113  450351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:22:13.533059  450351 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-13 22:22:13.516903848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:22:13.533661  450351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:22:13.533689  450351 cni.go:84] Creating CNI manager for ""
	I1013 22:22:13.533757  450351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:22:13.533890  450351 start.go:349] cluster config:
	{Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:22:13.537247  450351 out.go:179] * Starting "functional-544242" primary control-plane node in "functional-544242" cluster
	I1013 22:22:13.540141  450351 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:22:13.543045  450351 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:22:13.545844  450351 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:22:13.545898  450351 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:22:13.545907  450351 cache.go:58] Caching tarball of preloaded images
	I1013 22:22:13.545994  450351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:22:13.546000  450351 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:22:13.546011  450351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:22:13.546130  450351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/config.json ...
	I1013 22:22:13.581218  450351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 22:22:13.581231  450351 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 22:22:13.581259  450351 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:22:13.581281  450351 start.go:360] acquireMachinesLock for functional-544242: {Name:mk78eb2133ab84a878bf48dbfb8f80a7c2b5150c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:22:13.581350  450351 start.go:364] duration metric: took 52.233µs to acquireMachinesLock for "functional-544242"
	I1013 22:22:13.581370  450351 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:22:13.581380  450351 fix.go:54] fixHost starting: 
	I1013 22:22:13.581683  450351 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
	I1013 22:22:13.603650  450351 fix.go:112] recreateIfNeeded on functional-544242: state=Running err=<nil>
	W1013 22:22:13.603686  450351 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:22:13.606926  450351 out.go:252] * Updating the running docker "functional-544242" container ...
	I1013 22:22:13.606954  450351 machine.go:93] provisionDockerMachine start ...
	I1013 22:22:13.607062  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:13.625818  450351 main.go:141] libmachine: Using SSH client type: native
	I1013 22:22:13.626131  450351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1013 22:22:13.626137  450351 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:22:13.770740  450351 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-544242
	
	I1013 22:22:13.770753  450351 ubuntu.go:182] provisioning hostname "functional-544242"
	I1013 22:22:13.770822  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:13.788806  450351 main.go:141] libmachine: Using SSH client type: native
	I1013 22:22:13.789111  450351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1013 22:22:13.789120  450351 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-544242 && echo "functional-544242" | sudo tee /etc/hostname
	I1013 22:22:13.944677  450351 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-544242
	
	I1013 22:22:13.944776  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:13.964105  450351 main.go:141] libmachine: Using SSH client type: native
	I1013 22:22:13.964398  450351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1013 22:22:13.964412  450351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-544242' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-544242/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-544242' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:22:14.115752  450351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:22:14.115769  450351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 22:22:14.115796  450351 ubuntu.go:190] setting up certificates
	I1013 22:22:14.115804  450351 provision.go:84] configureAuth start
	I1013 22:22:14.115862  450351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-544242
	I1013 22:22:14.135758  450351 provision.go:143] copyHostCerts
	I1013 22:22:14.135816  450351 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 22:22:14.135832  450351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 22:22:14.135907  450351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 22:22:14.136014  450351 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 22:22:14.136018  450351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 22:22:14.136045  450351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 22:22:14.136100  450351 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 22:22:14.136103  450351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 22:22:14.136126  450351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 22:22:14.136168  450351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.functional-544242 san=[127.0.0.1 192.168.49.2 functional-544242 localhost minikube]
	I1013 22:22:14.452283  450351 provision.go:177] copyRemoteCerts
	I1013 22:22:14.452346  450351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:22:14.452393  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:14.470938  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:14.576778  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:22:14.594727  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:22:14.613006  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:22:14.630783  450351 provision.go:87] duration metric: took 514.96611ms to configureAuth
	I1013 22:22:14.630800  450351 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:22:14.631001  450351 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:22:14.631136  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:14.648527  450351 main.go:141] libmachine: Using SSH client type: native
	I1013 22:22:14.648846  450351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1013 22:22:14.648859  450351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:22:20.034480  450351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:22:20.034494  450351 machine.go:96] duration metric: took 6.427532909s to provisionDockerMachine
	I1013 22:22:20.034505  450351 start.go:293] postStartSetup for "functional-544242" (driver="docker")
	I1013 22:22:20.034516  450351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:22:20.034600  450351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:22:20.034648  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:20.053978  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:20.159396  450351 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:22:20.162962  450351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:22:20.162982  450351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:22:20.162992  450351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 22:22:20.163056  450351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 22:22:20.163156  450351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 22:22:20.163324  450351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/test/nested/copy/430652/hosts -> hosts in /etc/test/nested/copy/430652
	I1013 22:22:20.163377  450351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/430652
	I1013 22:22:20.171349  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 22:22:20.190431  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/test/nested/copy/430652/hosts --> /etc/test/nested/copy/430652/hosts (40 bytes)
	I1013 22:22:20.209072  450351 start.go:296] duration metric: took 174.551643ms for postStartSetup
	I1013 22:22:20.209158  450351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:22:20.209196  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:20.226541  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:20.328626  450351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:22:20.333692  450351 fix.go:56] duration metric: took 6.752309508s for fixHost
	I1013 22:22:20.333707  450351 start.go:83] releasing machines lock for "functional-544242", held for 6.752350082s
	I1013 22:22:20.333775  450351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-544242
	I1013 22:22:20.350508  450351 ssh_runner.go:195] Run: cat /version.json
	I1013 22:22:20.350555  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:20.350821  450351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:22:20.350873  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:20.369997  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:20.385100  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:20.470899  450351 ssh_runner.go:195] Run: systemctl --version
	I1013 22:22:20.564298  450351 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:22:20.601828  450351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:22:20.606401  450351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:22:20.606470  450351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:22:20.614753  450351 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:22:20.614767  450351 start.go:495] detecting cgroup driver to use...
	I1013 22:22:20.614812  450351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:22:20.614862  450351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:22:20.630987  450351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:22:20.644454  450351 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:22:20.644510  450351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:22:20.662333  450351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:22:20.676444  450351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:22:20.818204  450351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:22:20.967250  450351 docker.go:234] disabling docker service ...
	I1013 22:22:20.967324  450351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:22:20.984330  450351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:22:20.997815  450351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:22:21.168489  450351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:22:21.322071  450351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:22:21.336883  450351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:22:21.352745  450351 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:22:21.352806  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.361977  450351 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:22:21.362037  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.371651  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.381339  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.390774  450351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:22:21.399047  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.408522  450351 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.417146  450351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:22:21.426108  450351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:22:21.433816  450351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:22:21.441314  450351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:22:21.586606  450351 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:22:29.524800  450351 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.93816761s)
	I1013 22:22:29.524820  450351 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:22:29.524889  450351 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:22:29.529123  450351 start.go:563] Will wait 60s for crictl version
	I1013 22:22:29.529176  450351 ssh_runner.go:195] Run: which crictl
	I1013 22:22:29.532870  450351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:22:29.557655  450351 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:22:29.557736  450351 ssh_runner.go:195] Run: crio --version
	I1013 22:22:29.585986  450351 ssh_runner.go:195] Run: crio --version
	I1013 22:22:29.618606  450351 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:22:29.621602  450351 cli_runner.go:164] Run: docker network inspect functional-544242 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:22:29.637442  450351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 22:22:29.644884  450351 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1013 22:22:29.647834  450351 kubeadm.go:883] updating cluster {Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:22:29.647990  450351 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:22:29.648063  450351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:22:29.685606  450351 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:22:29.685618  450351 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:22:29.685670  450351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:22:29.713033  450351 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:22:29.713045  450351 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:22:29.713052  450351 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1013 22:22:29.713151  450351 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-544242 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:22:29.713227  450351 ssh_runner.go:195] Run: crio config
	I1013 22:22:29.785692  450351 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1013 22:22:29.785719  450351 cni.go:84] Creating CNI manager for ""
	I1013 22:22:29.785728  450351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:22:29.785743  450351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:22:29.785766  450351 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-544242 NodeName:functional-544242 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:22:29.785913  450351 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-544242"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
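Note on the config above: the generated kubeadm.yaml is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for sanity-checking such a stream before it is shipped to the node, assuming gopkg.in/yaml.v3 is available; this is illustrative, not minikube's own code:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.Decoder iterates over each "---"-separated document.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatalf("invalid document: %v", err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }

Run against the config above, this should print the four apiVersion/kind pairs in order.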
	I1013 22:22:29.785989  450351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:22:29.794170  450351 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:22:29.794229  450351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:22:29.802116  450351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:22:29.816225  450351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:22:29.829799  450351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1013 22:22:29.842888  450351 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:22:29.846781  450351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:22:29.986427  450351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:22:30.000107  450351 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242 for IP: 192.168.49.2
	I1013 22:22:30.000118  450351 certs.go:195] generating shared ca certs ...
	I1013 22:22:30.000143  450351 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:22:30.000281  450351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 22:22:30.000325  450351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 22:22:30.000331  450351 certs.go:257] generating profile certs ...
	I1013 22:22:30.000409  450351 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.key
	I1013 22:22:30.000457  450351 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/apiserver.key.5b873be0
	I1013 22:22:30.000526  450351 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/proxy-client.key
	I1013 22:22:30.000640  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 22:22:30.000664  450351 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 22:22:30.000672  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:22:30.000694  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:22:30.000718  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:22:30.000744  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 22:22:30.000788  450351 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 22:22:30.001431  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:22:30.035812  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:22:30.071322  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:22:30.102949  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:22:30.122695  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:22:30.143073  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:22:30.162008  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:22:30.180549  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:22:30.199857  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 22:22:30.218692  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 22:22:30.237535  450351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:22:30.256477  450351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:22:30.270160  450351 ssh_runner.go:195] Run: openssl version
	I1013 22:22:30.276787  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:22:30.285564  450351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:22:30.289466  450351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:22:30.289521  450351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:22:30.330863  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:22:30.339248  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 22:22:30.347965  450351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 22:22:30.351865  450351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 22:22:30.351945  450351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 22:22:30.393415  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 22:22:30.402153  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 22:22:30.410664  450351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 22:22:30.414357  450351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 22:22:30.414412  450351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 22:22:30.455757  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
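The openssl/ln sequence above installs each CA into the system trust store: `openssl x509 -hash -noout` prints the subject hash, and the cert is symlinked as /etc/ssl/certs/<hash>.0 so TLS clients can find it. A local Go sketch of the same two steps, shelling out to openssl as the log does (paths are taken from the log; minikube itself runs these over SSH):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"

        // Subject hash, e.g. "b5213941" for minikubeCA above.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // "-f" semantics: replace an existing link
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", pem, "->", link)
    }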
	I1013 22:22:30.463683  450351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:22:30.467376  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:22:30.511954  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:22:30.552826  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:22:30.594728  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:22:30.635584  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:22:30.676839  450351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
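Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours, which decides whether the profile certs can be reused. The same check in pure Go with crypto/x509, a sketch using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of -checkend 86400: fail if expiry falls inside 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate valid past the 24h window")
    }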
	I1013 22:22:30.717795  450351 kubeadm.go:400] StartCluster: {Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:22:30.717886  450351 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:22:30.717960  450351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:22:30.746532  450351 cri.go:89] found id: "fe7756389e8a840aa61e637bdbba2aacbbf0738feaa7fc01ad57e37e60c3564a"
	I1013 22:22:30.746543  450351 cri.go:89] found id: "f8069a3687d15a066b6c3c33c8a4d4ac621315aaf15df85f9f9ce96f25f56363"
	I1013 22:22:30.746546  450351 cri.go:89] found id: "e3e8c2298b56ec80c9ca93adc51c400869cfa9de1456a438291ff114d3828073"
	I1013 22:22:30.746549  450351 cri.go:89] found id: "a5e8f28e3b86b1e3014c7b8ebb9251fdcc64558359f46ea9768bf8428d6733cb"
	I1013 22:22:30.746552  450351 cri.go:89] found id: "995363469e07d5c97dbe67abc2b9f975396843596dfcdd71445d9e9068eed285"
	I1013 22:22:30.746555  450351 cri.go:89] found id: "1daa05c335e205802919cfe6c36cf61da6815343a06469e1ebce08f81d343beb"
	I1013 22:22:30.746557  450351 cri.go:89] found id: "d4168ec34011c29cf36041daf5a90eb62ccfa440ca03ed8e7133fa85c01e315b"
	I1013 22:22:30.746559  450351 cri.go:89] found id: "87d59afcd7424cc47f97872978ced47ead3418c79dc9694f5de4f943be1019e2"
	I1013 22:22:30.746562  450351 cri.go:89] found id: "ee7f3a87a3859cc2ad7d1b026514cec1ec0a852befa879412c00279fb610ed88"
	I1013 22:22:30.746569  450351 cri.go:89] found id: "6b3384e688bbf02eb05e45dfdf984347bbc5c31fcd8c0b8afa43be7dcbc37674"
	I1013 22:22:30.746571  450351 cri.go:89] found id: "37c6c41660f916479866f31d698aca14a21754211077effac256926d74cc532c"
	I1013 22:22:30.746577  450351 cri.go:89] found id: "2d295e53780dfc3f62ce9583656c2002628d20410b7f54a510d67b1247629694"
	I1013 22:22:30.746580  450351 cri.go:89] found id: "dc6749b60cc9dc9c39466dc55a717f0e68ce17a5608460c4da3a44a3972e7e16"
	I1013 22:22:30.746582  450351 cri.go:89] found id: "a69b6d03cb76b52595ac2542aff07255c861de3aa03fa6788c67e86035d45089"
	I1013 22:22:30.746584  450351 cri.go:89] found id: "dd8a441286d4fb2969b284e71fb4a3f017a18e066cb013f894acc0bdcecb12a4"
	I1013 22:22:30.746588  450351 cri.go:89] found id: "e3ebcb1481a729f749bc6f92a69cfec0a0e130bd51ea208caf673bd344b6f31f"
	I1013 22:22:30.746590  450351 cri.go:89] found id: ""
	I1013 22:22:30.746642  450351 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:22:30.757721  450351 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:22:30Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:22:30.757794  450351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:22:30.765683  450351 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:22:30.765692  450351 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:22:30.765742  450351 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:22:30.773072  450351 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:22:30.773624  450351 kubeconfig.go:125] found "functional-544242" server: "https://192.168.49.2:8441"
	I1013 22:22:30.774954  450351 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:22:30.782804  450351 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-13 22:20:30.847006405 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-13 22:22:29.836788395 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
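The drift check above is a straight content comparison of the last-applied config against the freshly rendered one, with `diff -u` supplying the human-readable delta. A local Go sketch of the same decision, under the assumption that both files are readable:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    func main() {
        old, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(old, fresh) {
            fmt.Println("kubeadm config unchanged; no reconfigure needed")
            return
        }
        fmt.Println("kubeadm config drift detected; reconfiguring cluster")
        // minikube then stops the kube-system containers and re-runs the
        // relevant `kubeadm init phase ...` commands, as the log shows next.
    }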
	I1013 22:22:30.782813  450351 kubeadm.go:1160] stopping kube-system containers ...
	I1013 22:22:30.782824  450351 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1013 22:22:30.782882  450351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:22:30.812312  450351 cri.go:89] found id: "fe7756389e8a840aa61e637bdbba2aacbbf0738feaa7fc01ad57e37e60c3564a"
	I1013 22:22:30.812323  450351 cri.go:89] found id: "f8069a3687d15a066b6c3c33c8a4d4ac621315aaf15df85f9f9ce96f25f56363"
	I1013 22:22:30.812326  450351 cri.go:89] found id: "e3e8c2298b56ec80c9ca93adc51c400869cfa9de1456a438291ff114d3828073"
	I1013 22:22:30.812329  450351 cri.go:89] found id: "a5e8f28e3b86b1e3014c7b8ebb9251fdcc64558359f46ea9768bf8428d6733cb"
	I1013 22:22:30.812332  450351 cri.go:89] found id: "995363469e07d5c97dbe67abc2b9f975396843596dfcdd71445d9e9068eed285"
	I1013 22:22:30.812334  450351 cri.go:89] found id: "1daa05c335e205802919cfe6c36cf61da6815343a06469e1ebce08f81d343beb"
	I1013 22:22:30.812337  450351 cri.go:89] found id: "d4168ec34011c29cf36041daf5a90eb62ccfa440ca03ed8e7133fa85c01e315b"
	I1013 22:22:30.812339  450351 cri.go:89] found id: "87d59afcd7424cc47f97872978ced47ead3418c79dc9694f5de4f943be1019e2"
	I1013 22:22:30.812342  450351 cri.go:89] found id: "ee7f3a87a3859cc2ad7d1b026514cec1ec0a852befa879412c00279fb610ed88"
	I1013 22:22:30.812347  450351 cri.go:89] found id: "6b3384e688bbf02eb05e45dfdf984347bbc5c31fcd8c0b8afa43be7dcbc37674"
	I1013 22:22:30.812359  450351 cri.go:89] found id: "37c6c41660f916479866f31d698aca14a21754211077effac256926d74cc532c"
	I1013 22:22:30.812361  450351 cri.go:89] found id: "2d295e53780dfc3f62ce9583656c2002628d20410b7f54a510d67b1247629694"
	I1013 22:22:30.812363  450351 cri.go:89] found id: "dc6749b60cc9dc9c39466dc55a717f0e68ce17a5608460c4da3a44a3972e7e16"
	I1013 22:22:30.812365  450351 cri.go:89] found id: "a69b6d03cb76b52595ac2542aff07255c861de3aa03fa6788c67e86035d45089"
	I1013 22:22:30.812367  450351 cri.go:89] found id: "dd8a441286d4fb2969b284e71fb4a3f017a18e066cb013f894acc0bdcecb12a4"
	I1013 22:22:30.812373  450351 cri.go:89] found id: "e3ebcb1481a729f749bc6f92a69cfec0a0e130bd51ea208caf673bd344b6f31f"
	I1013 22:22:30.812375  450351 cri.go:89] found id: ""
	I1013 22:22:30.812380  450351 cri.go:252] Stopping containers: [fe7756389e8a840aa61e637bdbba2aacbbf0738feaa7fc01ad57e37e60c3564a f8069a3687d15a066b6c3c33c8a4d4ac621315aaf15df85f9f9ce96f25f56363 e3e8c2298b56ec80c9ca93adc51c400869cfa9de1456a438291ff114d3828073 a5e8f28e3b86b1e3014c7b8ebb9251fdcc64558359f46ea9768bf8428d6733cb 995363469e07d5c97dbe67abc2b9f975396843596dfcdd71445d9e9068eed285 1daa05c335e205802919cfe6c36cf61da6815343a06469e1ebce08f81d343beb d4168ec34011c29cf36041daf5a90eb62ccfa440ca03ed8e7133fa85c01e315b 87d59afcd7424cc47f97872978ced47ead3418c79dc9694f5de4f943be1019e2 ee7f3a87a3859cc2ad7d1b026514cec1ec0a852befa879412c00279fb610ed88 6b3384e688bbf02eb05e45dfdf984347bbc5c31fcd8c0b8afa43be7dcbc37674 37c6c41660f916479866f31d698aca14a21754211077effac256926d74cc532c 2d295e53780dfc3f62ce9583656c2002628d20410b7f54a510d67b1247629694 dc6749b60cc9dc9c39466dc55a717f0e68ce17a5608460c4da3a44a3972e7e16 a69b6d03cb76b52595ac2542aff07255c861de3aa03fa6788c67e86035d45089 dd8a441286d4fb2969b284e71fb4a3f017a18e066cb013f894acc0bdcecb12a4 e3ebcb1481a729f749bc6f92a69cfec0a0e130bd51ea208caf673bd344b6f31f]
	I1013 22:22:30.812435  450351 ssh_runner.go:195] Run: which crictl
	I1013 22:22:30.816359  450351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 fe7756389e8a840aa61e637bdbba2aacbbf0738feaa7fc01ad57e37e60c3564a f8069a3687d15a066b6c3c33c8a4d4ac621315aaf15df85f9f9ce96f25f56363 e3e8c2298b56ec80c9ca93adc51c400869cfa9de1456a438291ff114d3828073 a5e8f28e3b86b1e3014c7b8ebb9251fdcc64558359f46ea9768bf8428d6733cb 995363469e07d5c97dbe67abc2b9f975396843596dfcdd71445d9e9068eed285 1daa05c335e205802919cfe6c36cf61da6815343a06469e1ebce08f81d343beb d4168ec34011c29cf36041daf5a90eb62ccfa440ca03ed8e7133fa85c01e315b 87d59afcd7424cc47f97872978ced47ead3418c79dc9694f5de4f943be1019e2 ee7f3a87a3859cc2ad7d1b026514cec1ec0a852befa879412c00279fb610ed88 6b3384e688bbf02eb05e45dfdf984347bbc5c31fcd8c0b8afa43be7dcbc37674 37c6c41660f916479866f31d698aca14a21754211077effac256926d74cc532c 2d295e53780dfc3f62ce9583656c2002628d20410b7f54a510d67b1247629694 dc6749b60cc9dc9c39466dc55a717f0e68ce17a5608460c4da3a44a3972e7e16 a69b6d03cb76b52595ac2542aff07255c861de3aa03fa6788c67e86035d45089 dd8a441286d4fb2969b284e71fb4a3f017a18e066cb013f894acc0bdcecb12a4 e3ebcb1481a729f749bc6f92a69cfec0a0e130bd51ea208caf673bd344b6f31f
	I1013 22:22:30.919282  450351 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 22:22:31.029979  450351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:22:31.038045  450351 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 13 22:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 13 22:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 13 22:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 13 22:20 /etc/kubernetes/scheduler.conf
	
	I1013 22:22:31.038101  450351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1013 22:22:31.046247  450351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1013 22:22:31.054273  450351 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:22:31.054339  450351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:22:31.062160  450351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1013 22:22:31.070297  450351 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:22:31.070356  450351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:22:31.078278  450351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1013 22:22:31.086690  450351 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:22:31.086746  450351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
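The grep/rm loop above checks each stale kubeconfig for the expected control-plane endpoint and deletes any file that lacks it, so the following `kubeadm init phase kubeconfig` regenerates them. A minimal Go sketch of that check, reading the files directly instead of shelling out to grep:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8441")
        for _, path := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(path)
            if err != nil {
                log.Printf("skip %s: %v", path, err)
                continue
            }
            // Missing endpoint means the file points at an old address;
            // remove it so kubeadm writes a fresh one.
            if !bytes.Contains(data, endpoint) {
                fmt.Printf("%s lacks %s; removing\n", path, endpoint)
                if err := os.Remove(path); err != nil {
                    log.Fatal(err)
                }
            }
        }
    }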
	I1013 22:22:31.095530  450351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:22:31.104234  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:31.154027  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:32.773136  450351 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.619086581s)
	I1013 22:22:32.773192  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:32.994082  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:33.064984  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:33.132004  450351 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:22:33.132077  450351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:22:33.633091  450351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:22:34.132956  450351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:22:34.148384  450351 api_server.go:72] duration metric: took 1.016388931s to wait for apiserver process to appear ...
	I1013 22:22:34.148398  450351 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:22:34.148416  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:37.134649  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 22:22:37.134666  450351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 22:22:37.134678  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:37.261852  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 22:22:37.261868  450351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 22:22:37.261880  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:37.369772  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:22:37.369791  450351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:22:37.649199  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:37.659899  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:22:37.659926  450351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:22:38.148516  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:38.163453  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:22:38.163476  450351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:22:38.649082  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:38.657171  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1013 22:22:38.671480  450351 api_server.go:141] control plane version: v1.34.1
	I1013 22:22:38.671498  450351 api_server.go:131] duration metric: took 4.523093965s to wait for apiserver health ...
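As the healthz exchanges above show, the wait loop tolerates transient 403s (the anonymous probe is rejected until RBAC bootstraps) and 500s (poststarthooks still pending), retrying on a short interval until /healthz returns 200. A self-contained Go sketch of that polling pattern; certificate verification is skipped for brevity, since the endpoint presents the cluster's self-signed cert:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                // 403 and 500 are expected while the server bootstraps.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }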
	I1013 22:22:38.671506  450351 cni.go:84] Creating CNI manager for ""
	I1013 22:22:38.671514  450351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:22:38.675203  450351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:22:38.678259  450351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:22:38.682661  450351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:22:38.682673  450351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:22:38.698547  450351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:22:39.292720  450351 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:22:39.308329  450351 system_pods.go:59] 8 kube-system pods found
	I1013 22:22:39.308355  450351 system_pods.go:61] "coredns-66bc5c9577-9npmn" [ce796c37-a134-4672-b1ca-4a29fcea7ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:22:39.308362  450351 system_pods.go:61] "etcd-functional-544242" [f2006e87-ee88-46bb-870f-e552dd3c0c0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:22:39.308368  450351 system_pods.go:61] "kindnet-rmpd5" [046b2dab-50d4-44ff-9236-bfb436339529] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:22:39.308376  450351 system_pods.go:61] "kube-apiserver-functional-544242" [c2a36801-16ad-491b-addd-07247a68acc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:22:39.308382  450351 system_pods.go:61] "kube-controller-manager-functional-544242" [782b85f0-1ec7-4d76-923a-8cfb9cdf9f0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:22:39.308387  450351 system_pods.go:61] "kube-proxy-p4h5z" [7cca7baf-6913-43c8-9ff2-210816643486] Running
	I1013 22:22:39.308392  450351 system_pods.go:61] "kube-scheduler-functional-544242" [088a9f2d-b2e7-4226-863a-c239c78c25b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:22:39.308396  450351 system_pods.go:61] "storage-provisioner" [ef036d96-835e-4a32-bd45-950819b494e4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:22:39.308401  450351 system_pods.go:74] duration metric: took 15.671004ms to wait for pod list to return data ...
	I1013 22:22:39.308408  450351 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:22:39.312398  450351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:22:39.312417  450351 node_conditions.go:123] node cpu capacity is 2
	I1013 22:22:39.312428  450351 node_conditions.go:105] duration metric: took 4.015557ms to run NodePressure ...
	I1013 22:22:39.312488  450351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:22:39.577680  450351 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 22:22:39.581400  450351 kubeadm.go:743] kubelet initialised
	I1013 22:22:39.581410  450351 kubeadm.go:744] duration metric: took 3.717527ms waiting for restarted kubelet to initialise ...
	I1013 22:22:39.581424  450351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:22:39.590608  450351 ops.go:34] apiserver oom_adj: -16
	I1013 22:22:39.590619  450351 kubeadm.go:601] duration metric: took 8.82492187s to restartPrimaryControlPlane
	I1013 22:22:39.590627  450351 kubeadm.go:402] duration metric: took 8.872841458s to StartCluster
	I1013 22:22:39.590641  450351 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:22:39.590701  450351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:22:39.591352  450351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:22:39.591585  450351 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:22:39.591839  450351 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:22:39.591895  450351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:22:39.591976  450351 addons.go:69] Setting storage-provisioner=true in profile "functional-544242"
	I1013 22:22:39.591998  450351 addons.go:238] Setting addon storage-provisioner=true in "functional-544242"
	W1013 22:22:39.592003  450351 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:22:39.592024  450351 host.go:66] Checking if "functional-544242" exists ...
	I1013 22:22:39.592042  450351 addons.go:69] Setting default-storageclass=true in profile "functional-544242"
	I1013 22:22:39.592054  450351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-544242"
	I1013 22:22:39.592366  450351 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
	I1013 22:22:39.592433  450351 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
	I1013 22:22:39.597278  450351 out.go:179] * Verifying Kubernetes components...
	I1013 22:22:39.600104  450351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:22:39.618287  450351 addons.go:238] Setting addon default-storageclass=true in "functional-544242"
	W1013 22:22:39.618298  450351 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:22:39.618321  450351 host.go:66] Checking if "functional-544242" exists ...
	I1013 22:22:39.618723  450351 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
	I1013 22:22:39.626316  450351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:22:39.629271  450351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:22:39.629283  450351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:22:39.629358  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:39.642908  450351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:22:39.642920  450351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:22:39.642980  450351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:22:39.663300  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:39.678455  450351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:22:39.813550  450351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:22:39.827795  450351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:22:39.847374  450351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:22:40.724941  450351 node_ready.go:35] waiting up to 6m0s for node "functional-544242" to be "Ready" ...
	I1013 22:22:40.727687  450351 node_ready.go:49] node "functional-544242" is "Ready"
	I1013 22:22:40.727701  450351 node_ready.go:38] duration metric: took 2.743408ms for node "functional-544242" to be "Ready" ...
	I1013 22:22:40.727711  450351 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:22:40.727768  450351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:22:40.735391  450351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:22:40.738253  450351 addons.go:514] duration metric: took 1.146345405s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:22:40.741400  450351 api_server.go:72] duration metric: took 1.149791126s to wait for apiserver process to appear ...
	I1013 22:22:40.741413  450351 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:22:40.741430  450351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 22:22:40.750772  450351 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1013 22:22:40.751821  450351 api_server.go:141] control plane version: v1.34.1
	I1013 22:22:40.751834  450351 api_server.go:131] duration metric: took 10.416012ms to wait for apiserver health ...
	I1013 22:22:40.751842  450351 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:22:40.755431  450351 system_pods.go:59] 8 kube-system pods found
	I1013 22:22:40.755446  450351 system_pods.go:61] "coredns-66bc5c9577-9npmn" [ce796c37-a134-4672-b1ca-4a29fcea7ec4] Running
	I1013 22:22:40.755454  450351 system_pods.go:61] "etcd-functional-544242" [f2006e87-ee88-46bb-870f-e552dd3c0c0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:22:40.755459  450351 system_pods.go:61] "kindnet-rmpd5" [046b2dab-50d4-44ff-9236-bfb436339529] Running
	I1013 22:22:40.755465  450351 system_pods.go:61] "kube-apiserver-functional-544242" [c2a36801-16ad-491b-addd-07247a68acc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:22:40.755472  450351 system_pods.go:61] "kube-controller-manager-functional-544242" [782b85f0-1ec7-4d76-923a-8cfb9cdf9f0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:22:40.755476  450351 system_pods.go:61] "kube-proxy-p4h5z" [7cca7baf-6913-43c8-9ff2-210816643486] Running
	I1013 22:22:40.755481  450351 system_pods.go:61] "kube-scheduler-functional-544242" [088a9f2d-b2e7-4226-863a-c239c78c25b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:22:40.755484  450351 system_pods.go:61] "storage-provisioner" [ef036d96-835e-4a32-bd45-950819b494e4] Running
	I1013 22:22:40.755489  450351 system_pods.go:74] duration metric: took 3.641877ms to wait for pod list to return data ...
	I1013 22:22:40.755495  450351 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:22:40.757808  450351 default_sa.go:45] found service account: "default"
	I1013 22:22:40.757821  450351 default_sa.go:55] duration metric: took 2.32136ms for default service account to be created ...
	I1013 22:22:40.757829  450351 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:22:40.760881  450351 system_pods.go:86] 8 kube-system pods found
	I1013 22:22:40.760895  450351 system_pods.go:89] "coredns-66bc5c9577-9npmn" [ce796c37-a134-4672-b1ca-4a29fcea7ec4] Running
	I1013 22:22:40.760903  450351 system_pods.go:89] "etcd-functional-544242" [f2006e87-ee88-46bb-870f-e552dd3c0c0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:22:40.760907  450351 system_pods.go:89] "kindnet-rmpd5" [046b2dab-50d4-44ff-9236-bfb436339529] Running
	I1013 22:22:40.760913  450351 system_pods.go:89] "kube-apiserver-functional-544242" [c2a36801-16ad-491b-addd-07247a68acc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:22:40.760918  450351 system_pods.go:89] "kube-controller-manager-functional-544242" [782b85f0-1ec7-4d76-923a-8cfb9cdf9f0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:22:40.760922  450351 system_pods.go:89] "kube-proxy-p4h5z" [7cca7baf-6913-43c8-9ff2-210816643486] Running
	I1013 22:22:40.760927  450351 system_pods.go:89] "kube-scheduler-functional-544242" [088a9f2d-b2e7-4226-863a-c239c78c25b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:22:40.760930  450351 system_pods.go:89] "storage-provisioner" [ef036d96-835e-4a32-bd45-950819b494e4] Running
	I1013 22:22:40.760936  450351 system_pods.go:126] duration metric: took 3.102335ms to wait for k8s-apps to be running ...
	I1013 22:22:40.760947  450351 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:22:40.761004  450351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:22:40.773782  450351 system_svc.go:56] duration metric: took 12.829933ms WaitForService to wait for kubelet
	I1013 22:22:40.773799  450351 kubeadm.go:586] duration metric: took 1.182193915s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:22:40.773815  450351 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:22:40.777020  450351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:22:40.777034  450351 node_conditions.go:123] node cpu capacity is 2
	I1013 22:22:40.777044  450351 node_conditions.go:105] duration metric: took 3.22403ms to run NodePressure ...
	I1013 22:22:40.777055  450351 start.go:241] waiting for startup goroutines ...
	I1013 22:22:40.777062  450351 start.go:246] waiting for cluster config update ...
	I1013 22:22:40.777071  450351 start.go:255] writing updated cluster config ...
	I1013 22:22:40.777354  450351 ssh_runner.go:195] Run: rm -f paused
	I1013 22:22:40.780975  450351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:22:40.784543  450351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9npmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:40.789803  450351 pod_ready.go:94] pod "coredns-66bc5c9577-9npmn" is "Ready"
	I1013 22:22:40.789817  450351 pod_ready.go:86] duration metric: took 5.260038ms for pod "coredns-66bc5c9577-9npmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:40.792378  450351 pod_ready.go:83] waiting for pod "etcd-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:22:42.798656  450351 pod_ready.go:104] pod "etcd-functional-544242" is not "Ready", error: <nil>
	W1013 22:22:45.305493  450351 pod_ready.go:104] pod "etcd-functional-544242" is not "Ready", error: <nil>
	W1013 22:22:47.798180  450351 pod_ready.go:104] pod "etcd-functional-544242" is not "Ready", error: <nil>
	I1013 22:22:49.798508  450351 pod_ready.go:94] pod "etcd-functional-544242" is "Ready"
	I1013 22:22:49.798524  450351 pod_ready.go:86] duration metric: took 9.006134s for pod "etcd-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.801241  450351 pod_ready.go:83] waiting for pod "kube-apiserver-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.806010  450351 pod_ready.go:94] pod "kube-apiserver-functional-544242" is "Ready"
	I1013 22:22:49.806023  450351 pod_ready.go:86] duration metric: took 4.769742ms for pod "kube-apiserver-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.808654  450351 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.812976  450351 pod_ready.go:94] pod "kube-controller-manager-functional-544242" is "Ready"
	I1013 22:22:49.812990  450351 pod_ready.go:86] duration metric: took 4.322365ms for pod "kube-controller-manager-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.815055  450351 pod_ready.go:83] waiting for pod "kube-proxy-p4h5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:49.996378  450351 pod_ready.go:94] pod "kube-proxy-p4h5z" is "Ready"
	I1013 22:22:49.996392  450351 pod_ready.go:86] duration metric: took 181.325673ms for pod "kube-proxy-p4h5z" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:50.197146  450351 pod_ready.go:83] waiting for pod "kube-scheduler-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:22:52.204715  450351 pod_ready.go:104] pod "kube-scheduler-functional-544242" is not "Ready", error: <nil>
	I1013 22:22:53.203167  450351 pod_ready.go:94] pod "kube-scheduler-functional-544242" is "Ready"
	I1013 22:22:53.203182  450351 pod_ready.go:86] duration metric: took 3.006024401s for pod "kube-scheduler-functional-544242" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:22:53.203192  450351 pod_ready.go:40] duration metric: took 12.422196891s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:22:53.258357  450351 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:22:53.261324  450351 out.go:179] * Done! kubectl is now configured to use "functional-544242" cluster and "default" namespace by default
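
Editor's note: the pod_ready entries above are minikube's final health gate. It polls each core kube-system pod by label until the pod reports Ready (or is gone), with a 4-minute ceiling per pod. A rough stand-alone equivalent of one such check, using plain kubectl against the same profile (illustrative, not part of the harness):

    kubectl --context functional-544242 -n kube-system wait pod \
      -l component=etcd --for=condition=Ready --timeout=4m

The same form works for the other labels in the list (k8s-app=kube-dns, component=kube-apiserver, and so on).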
	
	
	==> CRI-O <==
	Oct 13 22:23:27 functional-544242 crio[3524]: time="2025-10-13T22:23:27.22119226Z" level=info msg="Checking pod default_hello-node-75c85bcc94-chkxw for CNI network kindnet (type=ptp)"
	Oct 13 22:23:27 functional-544242 crio[3524]: time="2025-10-13T22:23:27.224166769Z" level=info msg="Ran pod sandbox fe141c1a4d13a4d5b18c01d7fe5234c4a543c8fe30a0df75c1851d2d615b41e6 with infra container: default/hello-node-75c85bcc94-chkxw/POD" id=762f05be-a22b-4ee9-b4d9-bf2a0d03119b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:23:27 functional-544242 crio[3524]: time="2025-10-13T22:23:27.22879363Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=23724a25-d1e8-436c-a96c-de1ff988f6d0 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:23:29 functional-544242 crio[3524]: time="2025-10-13T22:23:29.138104734Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c398cb25-9c5c-4509-b736-82b50f1c4345 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.300539205Z" level=info msg="Stopping pod sandbox: 0e7c1bff290c6b5661c4ccb5b125f9b641d9ea9c5ee00b18ff13c3492dad41b5" id=98cdac41-e09b-48ef-b0b7-b61c6aa90a1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.300631052Z" level=info msg="Stopped pod sandbox (already stopped): 0e7c1bff290c6b5661c4ccb5b125f9b641d9ea9c5ee00b18ff13c3492dad41b5" id=98cdac41-e09b-48ef-b0b7-b61c6aa90a1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.302269126Z" level=info msg="Removing pod sandbox: 0e7c1bff290c6b5661c4ccb5b125f9b641d9ea9c5ee00b18ff13c3492dad41b5" id=5a7286b8-895c-433d-9d93-ae78e6864f66 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.306424496Z" level=info msg="Removed pod sandbox: 0e7c1bff290c6b5661c4ccb5b125f9b641d9ea9c5ee00b18ff13c3492dad41b5" id=5a7286b8-895c-433d-9d93-ae78e6864f66 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.30729401Z" level=info msg="Stopping pod sandbox: e51d0941c8ca023fafea1345aaed152893877e2cf8da3518c5c0532fe1561a9f" id=0605c542-4d0c-44b0-9f53-4ce818eb1f8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.307370513Z" level=info msg="Stopped pod sandbox (already stopped): e51d0941c8ca023fafea1345aaed152893877e2cf8da3518c5c0532fe1561a9f" id=0605c542-4d0c-44b0-9f53-4ce818eb1f8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.307923865Z" level=info msg="Removing pod sandbox: e51d0941c8ca023fafea1345aaed152893877e2cf8da3518c5c0532fe1561a9f" id=6d9da89b-2d2c-4a2e-b4cf-9fb9d066d058 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.314073815Z" level=info msg="Removed pod sandbox: e51d0941c8ca023fafea1345aaed152893877e2cf8da3518c5c0532fe1561a9f" id=6d9da89b-2d2c-4a2e-b4cf-9fb9d066d058 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.317800426Z" level=info msg="Stopping pod sandbox: 8848a0c5152525cef8da2fa96ff577b26dd35e688f59f581cfa6663661691d1b" id=cea46e2e-8246-46ff-ae84-4f1b8875626f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.317867518Z" level=info msg="Stopped pod sandbox (already stopped): 8848a0c5152525cef8da2fa96ff577b26dd35e688f59f581cfa6663661691d1b" id=cea46e2e-8246-46ff-ae84-4f1b8875626f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.319229839Z" level=info msg="Removing pod sandbox: 8848a0c5152525cef8da2fa96ff577b26dd35e688f59f581cfa6663661691d1b" id=d8e39f61-4ddf-4e02-827c-09c7d3c3c19c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:33 functional-544242 crio[3524]: time="2025-10-13T22:23:33.32572599Z" level=info msg="Removed pod sandbox: 8848a0c5152525cef8da2fa96ff577b26dd35e688f59f581cfa6663661691d1b" id=d8e39f61-4ddf-4e02-827c-09c7d3c3c19c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 22:23:40 functional-544242 crio[3524]: time="2025-10-13T22:23:40.138391727Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bdbd3d35-1d07-45dc-a06f-7a9a18ab06a6 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:23:55 functional-544242 crio[3524]: time="2025-10-13T22:23:55.139012893Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2f5d607d-dd34-420f-80b9-f167e53d0a0e name=/runtime.v1.ImageService/PullImage
	Oct 13 22:24:05 functional-544242 crio[3524]: time="2025-10-13T22:24:05.138631845Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f87f7174-2d21-4272-bb80-203414888474 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:24:49 functional-544242 crio[3524]: time="2025-10-13T22:24:49.137727797Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ddb9ba60-a52e-411c-9284-e197b052695c name=/runtime.v1.ImageService/PullImage
	Oct 13 22:24:56 functional-544242 crio[3524]: time="2025-10-13T22:24:56.138364292Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cff859ce-4115-4a58-aa3c-1e9cf95f6c63 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:26:22 functional-544242 crio[3524]: time="2025-10-13T22:26:22.137544484Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=831b522b-e2e9-4899-a9d1-9b38d9d21cbb name=/runtime.v1.ImageService/PullImage
	Oct 13 22:26:29 functional-544242 crio[3524]: time="2025-10-13T22:26:29.137907979Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9dea2b6f-7a2c-424e-9e20-99d6dfab461a name=/runtime.v1.ImageService/PullImage
	Oct 13 22:29:09 functional-544242 crio[3524]: time="2025-10-13T22:29:09.137747419Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3ea7a226-2f46-4bf7-ade8-624191369bac name=/runtime.v1.ImageService/PullImage
	Oct 13 22:29:15 functional-544242 crio[3524]: time="2025-10-13T22:29:15.137962924Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=46c0982b-e94d-42ae-8da3-e1e397f2de3d name=/runtime.v1.ImageService/PullImage
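
Editor's note: the long tail of "Pulling image: kicbase/echo-server:latest" entries is a retry loop, not progress. Every attempt fails because CRI-O is running with short-name resolution in enforcing mode and the unqualified name is ambiguous; the kubelet errors at the end of this report spell this out. One way to make such a short name resolvable on the node, assuming the standard containers-registries layout, is a short-name alias drop-in (illustrative path and contents):

    # /etc/containers/registries.conf.d/echo-server-alias.conf
    [aliases]
      "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Fully qualifying the image in the workload spec (see the note after the kubelet section) avoids the problem without touching node config.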
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ffef9371429a2       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   54a74906d390d       sp-pod                                      default
	f8f32999b3a54       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   e94d2c8eb7a80       nginx-svc                                   default
	96f3e07a7d574       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   fbefd181c7d82       kindnet-rmpd5                               kube-system
	e66c21acfba72       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   8b0931d1a9b20       kube-proxy-p4h5z                            kube-system
	7741abab0d61b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   65b9712a108e0       coredns-66bc5c9577-9npmn                    kube-system
	0a2a355fe7ffd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   ad88eec2e9c6b       storage-provisioner                         kube-system
	6b567b4f31016       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   0d1316613915a       kube-apiserver-functional-544242            kube-system
	832fbd611a153       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   836535e3223b3       kube-controller-manager-functional-544242   kube-system
	f0f45b81e7960       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   9b9d070469041       kube-scheduler-functional-544242            kube-system
	5a51573795eb5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   88b68acb75d98       etcd-functional-544242                      kube-system
	fe7756389e8a8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   836535e3223b3       kube-controller-manager-functional-544242   kube-system
	f8069a3687d15       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   9b9d070469041       kube-scheduler-functional-544242            kube-system
	e3e8c2298b56e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   65b9712a108e0       coredns-66bc5c9577-9npmn                    kube-system
	a5e8f28e3b86b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   fbefd181c7d82       kindnet-rmpd5                               kube-system
	995363469e07d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   8b0931d1a9b20       kube-proxy-p4h5z                            kube-system
	1daa05c335e20       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   ad88eec2e9c6b       storage-provisioner                         kube-system
	d4168ec34011c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   88b68acb75d98       etcd-functional-544242                      kube-system
	
	
	==> coredns [7741abab0d61b2032091dad708baf0e79a997807c83aa8844e5b893fc7f8662d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42368 - 6267 "HINFO IN 5435029885293728505.823944855078304674. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01395632s
	
	
	==> coredns [e3e8c2298b56ec80c9ca93adc51c400869cfa9de1456a438291ff114d3828073] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47145 - 59122 "HINFO IN 5825991859903076673.5742092900425192487. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03319501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
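
Editor's note: the SIGTERM and lameduck lines are the normal shutdown path of the previous CoreDNS instance; the health plugin holds its endpoint open for 5s so in-flight queries drain before the pod exits. That 5s comes from the Corefile, which for a kubeadm-style cluster configures the plugins seen in this log roughly as follows (trimmed sketch of the usual default, not dumped from this cluster):

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        reload
    }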
	
	
	==> describe nodes <==
	Name:               functional-544242
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-544242
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-544242
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_20_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:20:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-544242
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:33:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:30:57 +0000   Mon, 13 Oct 2025 22:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:30:57 +0000   Mon, 13 Oct 2025 22:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:30:57 +0000   Mon, 13 Oct 2025 22:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:30:57 +0000   Mon, 13 Oct 2025 22:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-544242
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                791dc4cd-e994-4be3-843a-c075a465bac1
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-chkxw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  default                     hello-node-connect-7d85dfc575-c666d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-9npmn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-544242                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-rmpd5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-544242             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-544242    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-p4h5z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-544242             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-544242 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-544242 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-544242 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-544242 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-544242 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-544242 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-544242 event: Registered Node functional-544242 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-544242 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-544242 event: Registered Node functional-544242 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-544242 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-544242 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-544242 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-544242 event: Registered Node functional-544242 in Controller
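
Editor's note: the percentages in the Allocated resources table above follow directly from the capacity block. With 2 allocatable CPUs, the 850m of CPU requests is 850/2000 = 42.5%, shown truncated as 42%; the 220Mi of memory requests against 8022296Ki allocatable is 225280/8022296, about 2.8%, shown as 2%. The lone 100m CPU limit (5%) belongs to kindnet; every other pod here runs without limits.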
	
	
	==> dmesg <==
	[Oct13 21:01] hrtimer: interrupt took 13518544 ns
	[Oct13 22:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct13 22:13] overlayfs: idmapped layers are currently not supported
	[  +0.064178] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct13 22:19] overlayfs: idmapped layers are currently not supported
	[Oct13 22:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5a51573795eb5d343444d5c7f34fe7c5176ebdd4827a61b79fec2fb6807b8a22] <==
	{"level":"warn","ts":"2025-10-13T22:22:35.573845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.595817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.613986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.632092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.648019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.662501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.680188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.731639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.751916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.776919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.824360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.825154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.831245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.876527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.877231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.895897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.906979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.926784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.959877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:35.995176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:36.043778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:22:36.189465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46986","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:32:34.717644Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1118}
	{"level":"info","ts":"2025-10-13T22:32:34.741058Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1118,"took":"23.056925ms","hash":1471084201,"current-db-size-bytes":3198976,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-13T22:32:34.741110Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1471084201,"revision":1118,"compact-revision":-1}
	
	
	==> etcd [d4168ec34011c29cf36041daf5a90eb62ccfa440ca03ed8e7133fa85c01e315b] <==
	{"level":"warn","ts":"2025-10-13T22:21:51.312562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.327016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.354323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.379647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.402749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.423905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:21:51.535534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:22:14.822602Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T22:22:14.822659Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-544242","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-13T22:22:14.822765Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T22:22:14.974974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T22:22:14.976567Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T22:22:14.976640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T22:22:14.976750Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T22:22:14.976775Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T22:22:14.976804Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T22:22:14.976816Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-13T22:22:14.976785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T22:22:14.976832Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-13T22:22:14.976936Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T22:22:14.976948Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-13T22:22:14.981029Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-13T22:22:14.981127Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T22:22:14.981159Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-13T22:22:14.981172Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-544242","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:33:15 up  2:15,  0 user,  load average: 0.49, 0.49, 1.71
	Linux functional-544242 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [96f3e07a7d574fe48a887a613d5a047cb6a0fd926d3ce6ba1183d6c019a313ce] <==
	I1013 22:31:08.872313       1 main.go:301] handling current node
	I1013 22:31:18.872331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:31:18.872382       1 main.go:301] handling current node
	I1013 22:31:28.872224       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:31:28.872257       1 main.go:301] handling current node
	I1013 22:31:38.871865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:31:38.871970       1 main.go:301] handling current node
	I1013 22:31:48.873104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:31:48.873145       1 main.go:301] handling current node
	I1013 22:31:58.875247       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:31:58.875282       1 main.go:301] handling current node
	I1013 22:32:08.872361       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:08.872420       1 main.go:301] handling current node
	I1013 22:32:18.872345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:18.872379       1 main.go:301] handling current node
	I1013 22:32:28.872362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:28.872396       1 main.go:301] handling current node
	I1013 22:32:38.875160       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:38.875268       1 main.go:301] handling current node
	I1013 22:32:48.872271       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:48.872305       1 main.go:301] handling current node
	I1013 22:32:58.872214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:32:58.872245       1 main.go:301] handling current node
	I1013 22:33:08.875274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:33:08.875315       1 main.go:301] handling current node
	
	
	==> kindnet [a5e8f28e3b86b1e3014c7b8ebb9251fdcc64558359f46ea9768bf8428d6733cb] <==
	I1013 22:21:47.532320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:21:47.534297       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1013 22:21:47.534458       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:21:47.534471       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:21:47.534487       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:21:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:21:47.806729       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:21:47.806759       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:21:47.806771       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:21:47.807746       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:21:52.707992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:21:52.708037       1 metrics.go:72] Registering metrics
	I1013 22:21:52.708110       1 controller.go:711] "Syncing nftables rules"
	I1013 22:21:57.720322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:21:57.720407       1 main.go:301] handling current node
	I1013 22:22:07.720288       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 22:22:07.720401       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b567b4f310165f02a57d8c9ce03623cb8cfeeed4e0a20d55fab8a3f10890198] <==
	I1013 22:22:37.330162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:22:37.339141       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:22:37.341483       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:22:37.342425       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:22:37.348727       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:22:37.349115       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:22:37.366006       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:22:37.399128       1 cache.go:39] Caches are synced for autoregister controller
	E1013 22:22:37.411994       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:22:38.015312       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:22:38.232820       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:22:39.282620       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:22:39.433102       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:22:39.513405       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:22:39.521545       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:22:55.973982       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:22:56.598139       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.72.205"}
	I1013 22:22:56.620324       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:23:03.035249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.180.140"}
	I1013 22:23:13.506128       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:23:13.629180       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.8.92"}
	E1013 22:23:18.816333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59492: use of closed network connection
	E1013 22:23:19.458781       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1013 22:23:26.967944       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.9.180"}
	I1013 22:32:37.271274       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [832fbd611a153fda23fa1c67ab46a8ad6cc070880110848596c123a17de6a665] <==
	I1013 22:22:40.233833       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:22:40.241486       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:22:40.241591       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:22:40.241606       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:22:40.241613       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:22:40.242572       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:22:40.255223       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:22:40.258977       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:22:40.268730       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:22:40.269145       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:22:40.269341       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:22:40.269558       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:22:40.271874       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:22:40.277167       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:22:40.292615       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:22:40.292756       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:22:40.292854       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-544242"
	I1013 22:22:40.292908       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:22:40.292976       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:22:40.293257       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 22:22:40.295768       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:22:40.295830       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:22:40.295983       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:22:40.299513       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:22:40.315197       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-controller-manager [fe7756389e8a840aa61e637bdbba2aacbbf0738feaa7fc01ad57e37e60c3564a] <==
	I1013 22:21:55.790127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:21:55.790277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:21:55.791746       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:21:55.792785       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:21:55.795930       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:21:55.798243       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:21:55.806510       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:21:55.807765       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 22:21:55.808903       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:21:55.811168       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:21:55.812349       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:21:55.812498       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:21:55.815652       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:21:55.828201       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:21:55.828209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:21:55.828341       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:21:55.828348       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:21:55.828224       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:21:55.829027       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:21:55.829164       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-544242"
	I1013 22:21:55.829233       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:21:55.828235       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:21:55.828246       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:21:55.828261       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:21:55.843467       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [995363469e07d5c97dbe67abc2b9f975396843596dfcdd71445d9e9068eed285] <==
	I1013 22:21:50.771049       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:21:51.537610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:21:52.639210       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:21:52.647248       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 22:21:52.647561       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:21:52.810729       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:21:52.810878       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:21:52.815866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:21:52.816271       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:21:52.816476       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:21:52.817821       1 config.go:200] "Starting service config controller"
	I1013 22:21:52.817896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:21:52.817951       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:21:52.817980       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:21:52.818029       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:21:52.818055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:21:52.818740       1 config.go:309] "Starting node config controller"
	I1013 22:21:52.818799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:21:52.818827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:21:52.926016       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:21:52.929409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:21:52.933368       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e66c21acfba7215da58896cba722f519d63d341d54976981f2622933269752e7] <==
	I1013 22:22:38.625407       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:22:38.714247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:22:38.817097       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:22:38.817207       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 22:22:38.817353       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:22:38.939975       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:22:38.940044       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:22:38.981614       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:22:38.982054       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:22:38.983015       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:22:38.984467       1 config.go:200] "Starting service config controller"
	I1013 22:22:38.984542       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:22:38.984598       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:22:38.984644       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:22:38.984680       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:22:38.984705       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:22:38.986435       1 config.go:309] "Starting node config controller"
	I1013 22:22:38.986509       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:22:38.986541       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:22:39.085074       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:22:39.085169       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:22:39.085187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
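
Editor's note: both kube-proxy generations log the same self-diagnosis: nodePortAddresses is unset, so NodePort traffic is accepted on every local IP. The log's own suggestion maps to one field of KubeProxyConfiguration (sketch, assuming the config-file route rather than the deprecated flag):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
      - primary

For a single-purpose test node the default is harmless, which is presumably why the harness leaves it alone.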
	
	
	==> kube-scheduler [f0f45b81e796089c0b53186162c08ff1c5db16eed8ca02c468dfcd9c4804e6cd] <==
	I1013 22:22:35.155185       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:22:38.262240       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:22:38.262335       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:22:38.272246       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:22:38.272398       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:22:38.272420       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:22:38.272452       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:22:38.280695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:22:38.287169       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:22:38.287295       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:22:38.287336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:22:38.373136       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:22:38.388069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:22:38.388174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [f8069a3687d15a066b6c3c33c8a4d4ac621315aaf15df85f9f9ce96f25f56363] <==
	I1013 22:21:49.656531       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:21:52.404210       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:21:52.404320       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:21:52.404355       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:21:52.404384       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:21:52.491257       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:21:52.491286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:21:52.510077       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:21:52.513260       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:21:52.531803       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:21:52.513286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:21:52.635985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:22:14.830705       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:22:14.830833       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 22:22:14.830916       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 22:22:14.830945       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 22:22:14.830978       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 22:22:14.830992       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 13 22:30:39 functional-544242 kubelet[3839]: E1013 22:30:39.139359    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:30:46 functional-544242 kubelet[3839]: E1013 22:30:46.137869    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:30:54 functional-544242 kubelet[3839]: E1013 22:30:54.137937    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:30:59 functional-544242 kubelet[3839]: E1013 22:30:59.138420    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:31:05 functional-544242 kubelet[3839]: E1013 22:31:05.137915    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:31:14 functional-544242 kubelet[3839]: E1013 22:31:14.137680    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:31:16 functional-544242 kubelet[3839]: E1013 22:31:16.137969    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:31:25 functional-544242 kubelet[3839]: E1013 22:31:25.137287    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:31:27 functional-544242 kubelet[3839]: E1013 22:31:27.139066    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:31:39 functional-544242 kubelet[3839]: E1013 22:31:39.138908    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:31:40 functional-544242 kubelet[3839]: E1013 22:31:40.137314    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:31:51 functional-544242 kubelet[3839]: E1013 22:31:51.137962    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:31:54 functional-544242 kubelet[3839]: E1013 22:31:54.137066    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:02 functional-544242 kubelet[3839]: E1013 22:32:02.137779    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:32:05 functional-544242 kubelet[3839]: E1013 22:32:05.138256    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:13 functional-544242 kubelet[3839]: E1013 22:32:13.138587    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:32:19 functional-544242 kubelet[3839]: E1013 22:32:19.137974    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:26 functional-544242 kubelet[3839]: E1013 22:32:26.138030    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:32:32 functional-544242 kubelet[3839]: E1013 22:32:32.137981    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:41 functional-544242 kubelet[3839]: E1013 22:32:41.138144    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:32:43 functional-544242 kubelet[3839]: E1013 22:32:43.137776    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:55 functional-544242 kubelet[3839]: E1013 22:32:55.137847    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:32:56 functional-544242 kubelet[3839]: E1013 22:32:56.137488    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	Oct 13 22:33:07 functional-544242 kubelet[3839]: E1013 22:33:07.138545    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c666d" podUID="73896e22-ffc7-4f50-82bd-a27eb7bd3d49"
	Oct 13 22:33:10 functional-544242 kubelet[3839]: E1013 22:33:10.137464    3839 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-chkxw" podUID="e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7"
	
	
	==> storage-provisioner [0a2a355fe7ffd010eb87066090518173f78f9bfc886d87acd48f56cdda200a06] <==
	W1013 22:32:50.746481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:52.749670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:52.754257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:54.757295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:54.764240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:56.767368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:56.778445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:58.781850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:32:58.788698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:00.791688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:00.796181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:02.799912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:02.806684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:04.809340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:04.814224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:06.817743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:06.822404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:08.825365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:08.832054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:10.835060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:10.839582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:12.842580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:12.850455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:14.861461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:33:14.868897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [1daa05c335e205802919cfe6c36cf61da6815343a06469e1ebce08f81d343beb] <==
	I1013 22:21:48.052097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:21:52.691820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:21:52.692326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:21:52.709937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:21:56.174518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:00.447410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:04.046218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:07.100653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:10.122999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:10.128353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:22:10.128601       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:22:10.128780       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-544242_83671bde-6ab6-47e9-bf05-18370a1a37c6!
	I1013 22:22:10.129751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b7d2332-678c-4352-bce5-57ed98d295f4", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-544242_83671bde-6ab6-47e9-bf05-18370a1a37c6 became leader
	W1013 22:22:10.135236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:10.148960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:22:10.229600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-544242_83671bde-6ab6-47e9-bf05-18370a1a37c6!
	W1013 22:22:12.153577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:12.158568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:14.162068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:22:14.169763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-544242 -n functional-544242
helpers_test.go:269: (dbg) Run:  kubectl --context functional-544242 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-chkxw hello-node-connect-7d85dfc575-c666d
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-544242 describe pod hello-node-75c85bcc94-chkxw hello-node-connect-7d85dfc575-c666d
helpers_test.go:290: (dbg) kubectl --context functional-544242 describe pod hello-node-75c85bcc94-chkxw hello-node-connect-7d85dfc575-c666d:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-chkxw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-544242/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 22:23:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sm52s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sm52s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-chkxw to functional-544242
	  Normal   Pulling    6m47s (x5 over 9m49s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m47s (x5 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m47s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m38s (x21 over 9m49s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m38s (x21 over 9m49s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-c666d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-544242/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 22:23:13 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q74qz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q74qz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c666d to functional-544242
	  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.48s)
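
Every hello-node failure above traces back to one kubelet error: CRI-O resolves the unqualified image name "kicbase/echo-server" under short-name enforcing mode, and because the short name matches more than one configured registry the pull is rejected as ambiguous instead of defaulting to docker.io. A minimal diagnostic sketch, assuming the standard containers-registries config path inside the node (the set-image workaround is illustrative, not part of the test):

    # Inspect the short-name policy the node's CRI-O is using
    out/minikube-linux-arm64 -p functional-544242 ssh -- grep -R "short-name-mode" /etc/containers/

    # A fully qualified reference bypasses short-name resolution entirely
    kubectl --context functional-544242 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest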

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-544242 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-544242 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-chkxw" [e5c0ebbd-06fc-4dff-ac95-a1e08824d8b7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1013 22:23:43.650012  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:25:59.782931  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:26:27.491515  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:30:59.783471  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-544242 -n functional-544242
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-13 22:33:27.412041616 +0000 UTC m=+1241.075598749
functional_test.go:1460: (dbg) Run:  kubectl --context functional-544242 describe po hello-node-75c85bcc94-chkxw -n default
functional_test.go:1460: (dbg) kubectl --context functional-544242 describe po hello-node-75c85bcc94-chkxw -n default:
Name:             hello-node-75c85bcc94-chkxw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-544242/192.168.49.2
Start Time:       Mon, 13 Oct 2025 22:23:26 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sm52s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sm52s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-chkxw to functional-544242
Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-544242 logs hello-node-75c85bcc94-chkxw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-544242 logs hello-node-75c85bcc94-chkxw -n default: exit status 1 (187.160135ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-chkxw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-544242 logs hello-node-75c85bcc94-chkxw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)
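
The deployment itself is created successfully; only the image pull fails, so the pod sits in ImagePullBackOff for the full 10m0s wait. A hedged reproduction sketch that sidesteps the short-name ambiguity by qualifying the image (names as in the test; the timeout value is an assumption):

    kubectl --context functional-544242 delete deployment hello-node
    kubectl --context functional-544242 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest
    kubectl --context functional-544242 rollout status deployment/hello-node --timeout=120s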

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 service --namespace=default --https --url hello-node: exit status 115 (507.982128ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30869
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-544242 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
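
SVC_UNREACHABLE here is a consequence of the DeployApp failure: the NodePort (https://192.168.49.2:30869) exists, but no hello-node pod ever became Ready to back it, and the Format and URL subtests below fail the same way. A quick check one could run before expecting the service URL to work, assuming the same kubectl context:

    kubectl --context functional-544242 get endpoints hello-node
    kubectl --context functional-544242 get pods -l app=hello-node \
      -o jsonpath='{.items[*].status.phase}'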

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 service hello-node --url --format={{.IP}}: exit status 115 (587.65643ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-544242 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 service hello-node --url: exit status 115 (503.017023ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30869
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-544242 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30869
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image load --daemon kicbase/echo-server:functional-544242 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 image load --daemon kicbase/echo-server:functional-544242 --alsologtostderr: (2.68148504s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-544242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.00s)
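
The load command itself exits cleanly after 2.7s; only the follow-up "image ls" check misses the tag. One way to narrow down whether the transfer or the listing is at fault, assuming the host Docker daemon still holds the tag:

    # Confirm the source image exists on the host before loading
    docker image inspect kicbase/echo-server:functional-544242 --format '{{.Id}}'
    out/minikube-linux-arm64 -p functional-544242 image load --daemon \
      kicbase/echo-server:functional-544242 --alsologtostderr
    # List with full repository names; crio may store the tag under a different prefix
    out/minikube-linux-arm64 -p functional-544242 image ls | grep echo-server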

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image load --daemon kicbase/echo-server:functional-544242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-544242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-544242
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image load --daemon kicbase/echo-server:functional-544242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-544242" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image save kicbase/echo-server:functional-544242 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)
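
The save returned in 0.37s without writing anything, so the failure is silent on the CLI side. A sketch that makes the result observable immediately after saving (the /tmp path is substituted for the workspace path the test uses):

    out/minikube-linux-arm64 -p functional-544242 image save \
      kicbase/echo-server:functional-544242 /tmp/echo-server-save.tar --alsologtostderr
    # Both the file's existence and its contents are worth checking
    ls -l /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head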

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1013 22:33:42.839812  458110 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:33:42.840049  458110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:42.840063  458110 out.go:374] Setting ErrFile to fd 2...
	I1013 22:33:42.840069  458110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:42.840362  458110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:33:42.841028  458110 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:42.841151  458110 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:42.841645  458110 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
	I1013 22:33:42.865215  458110 ssh_runner.go:195] Run: systemctl --version
	I1013 22:33:42.865282  458110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
	I1013 22:33:42.883535  458110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
	I1013 22:33:42.985931  458110 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1013 22:33:42.985990  458110 cache_images.go:254] Failed to load cached images for "functional-544242": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1013 22:33:42.986008  458110 cache_images.go:266] failed pushing to: functional-544242

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
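
This failure is a direct cascade from ImageSaveToFile: the tarball was never written, so cache_images.go stats a nonexistent path. A defensive sketch that fails fast with a clearer message (paths as in the test):

    tar=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
    [ -s "$tar" ] || { echo "image save never produced $tar" >&2; exit 1; }
    out/minikube-linux-arm64 -p functional-544242 image load "$tar" --alsologtostderr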

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-544242
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image save --daemon kicbase/echo-server:functional-544242 --alsologtostderr
2025/10/13 22:33:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-544242
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-544242: exit status 1 (19.357086ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-544242

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-544242

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
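
Here the test removes the host-side tag first, runs "image save --daemon", and then expects the image to reappear in Docker under the localhost/ prefix; the inspect shows it never arrived. A verification sketch that lists whatever did land rather than probing one exact name (the grep pattern is an assumption):

    docker rmi kicbase/echo-server:functional-544242 2>/dev/null || true
    out/minikube-linux-arm64 -p functional-544242 image save --daemon \
      kicbase/echo-server:functional-544242 --alsologtostderr
    docker image ls --format '{{.Repository}}:{{.Tag}}' | grep echo-server \
      || echo "image never reached the Docker daemon"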

                                                
                                    
TestJSONOutput/pause/Command (2.12s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-381214 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-381214 --output=json --user=testUser: exit status 80 (2.122492077s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d442e81-1e37-45fd-856e-5d4e6e2ccb06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-381214 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"c08e9e13-b5ea-44cb-98b8-3b285f77cbaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T22:46:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"d13525c6-edcd-4f2e-b031-792b09ba9907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-381214 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.12s)
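
The pause machinery shells into the node and runs "sudo runc list -f json", which fails because /run/runc does not exist on this crio image; the unpause failure below reports the identical error, so nothing was ever paused to begin with. A diagnostic sketch for locating where the runtime actually keeps its state (the candidate paths are common defaults, not confirmed by this log):

    # Reproduce the failing call, then look for the real runtime root
    out/minikube-linux-arm64 -p json-output-381214 ssh -- sudo runc list -f json
    out/minikube-linux-arm64 -p json-output-381214 ssh -- \
      "ls -d /run/runc /run/crio /run/containers 2>&1"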

                                                
                                    
TestJSONOutput/unpause/Command (1.51s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-381214 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-381214 --output=json --user=testUser: exit status 80 (1.509045672s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f64067e5-ec18-49c2-a0bc-1ac527d65ee2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-381214 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b71391d7-737c-4474-a914-4d52f6828bee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T22:46:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"2d206e32-18db-4524-9156-61e40b1ea8ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-381214 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.51s)

                                                
                                    
TestPause/serial/Pause (7.3s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-836584 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-836584 --alsologtostderr -v=5: exit status 80 (2.104581822s)

                                                
                                                
-- stdout --
	* Pausing node pause-836584 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 23:08:57.935248  591258 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:08:57.936114  591258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:57.936151  591258 out.go:374] Setting ErrFile to fd 2...
	I1013 23:08:57.936171  591258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:57.936488  591258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:08:57.936799  591258 out.go:368] Setting JSON to false
	I1013 23:08:57.936853  591258 mustload.go:65] Loading cluster: pause-836584
	I1013 23:08:57.937319  591258 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:57.937827  591258 cli_runner.go:164] Run: docker container inspect pause-836584 --format={{.State.Status}}
	I1013 23:08:57.954837  591258 host.go:66] Checking if "pause-836584" exists ...
	I1013 23:08:57.955229  591258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:08:58.023374  591258 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:08:58.012322669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:08:58.024067  591258 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-836584 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
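
Note: the `%!s(bool=false)` markers and the trailing `="(MISSING)"` in the config dump above are Go fmt diagnostics, not corrupted output: the pause.go log statement applies the %s verb to non-string values and supplies fewer arguments than verbs. A minimal reproduction (illustrative, not minikube code):

package main

import "fmt"

func main() {
	// %s applied to a non-string value prints "%!s(type=value)":
	// exactly the "%!s(bool=false)" markers in the dump above.
	fmt.Printf("force:%s\n", false) // force:%!s(bool=false)

	// More verbs than arguments prints "%!s(MISSING)", which is
	// where the trailing ="(MISSING)" comes from.
	fmt.Printf("%s=%s\n", "keys") // keys=%!s(MISSING)
}
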
	I1013 23:08:58.027286  591258 out.go:179] * Pausing node pause-836584 ... 
	I1013 23:08:58.030946  591258 host.go:66] Checking if "pause-836584" exists ...
	I1013 23:08:58.031369  591258 ssh_runner.go:195] Run: systemctl --version
	I1013 23:08:58.031424  591258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:58.049864  591258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:58.154054  591258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:58.169736  591258 pause.go:52] kubelet running: true
	I1013 23:08:58.169804  591258 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:08:58.402183  591258 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:08:58.402325  591258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:08:58.466228  591258 cri.go:89] found id: "82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec"
	I1013 23:08:58.466250  591258 cri.go:89] found id: "1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca"
	I1013 23:08:58.466256  591258 cri.go:89] found id: "f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62"
	I1013 23:08:58.466265  591258 cri.go:89] found id: "58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac"
	I1013 23:08:58.466269  591258 cri.go:89] found id: "cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c"
	I1013 23:08:58.466272  591258 cri.go:89] found id: "cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25"
	I1013 23:08:58.466285  591258 cri.go:89] found id: "847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd"
	I1013 23:08:58.466289  591258 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:58.466292  591258 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:58.466298  591258 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:58.466306  591258 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:58.466314  591258 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:58.466317  591258 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:58.466321  591258 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:58.466324  591258 cri.go:89] found id: ""
	I1013 23:08:58.466386  591258 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:08:58.477600  591258 retry.go:31] will retry after 210.034771ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:58Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:08:58.687868  591258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:58.701368  591258 pause.go:52] kubelet running: false
	I1013 23:08:58.701433  591258 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:08:58.848344  591258 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:08:58.848470  591258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:08:58.924708  591258 cri.go:89] found id: "82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec"
	I1013 23:08:58.924730  591258 cri.go:89] found id: "1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca"
	I1013 23:08:58.924736  591258 cri.go:89] found id: "f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62"
	I1013 23:08:58.924739  591258 cri.go:89] found id: "58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac"
	I1013 23:08:58.924743  591258 cri.go:89] found id: "cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c"
	I1013 23:08:58.924748  591258 cri.go:89] found id: "cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25"
	I1013 23:08:58.924751  591258 cri.go:89] found id: "847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd"
	I1013 23:08:58.924754  591258 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:58.924757  591258 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:58.924763  591258 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:58.924766  591258 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:58.924770  591258 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:58.924773  591258 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:58.924776  591258 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:58.924779  591258 cri.go:89] found id: ""
	I1013 23:08:58.924828  591258 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:08:58.935551  591258 retry.go:31] will retry after 253.516276ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:58Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:08:59.190131  591258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:59.203250  591258 pause.go:52] kubelet running: false
	I1013 23:08:59.203362  591258 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:08:59.341490  591258 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:08:59.341574  591258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:08:59.408624  591258 cri.go:89] found id: "82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec"
	I1013 23:08:59.408647  591258 cri.go:89] found id: "1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca"
	I1013 23:08:59.408652  591258 cri.go:89] found id: "f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62"
	I1013 23:08:59.408656  591258 cri.go:89] found id: "58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac"
	I1013 23:08:59.408659  591258 cri.go:89] found id: "cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c"
	I1013 23:08:59.408662  591258 cri.go:89] found id: "cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25"
	I1013 23:08:59.408665  591258 cri.go:89] found id: "847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd"
	I1013 23:08:59.408668  591258 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:59.408671  591258 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:59.408677  591258 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:59.408680  591258 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:59.408683  591258 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:59.408686  591258 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:59.408691  591258 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:59.408694  591258 cri.go:89] found id: ""
	I1013 23:08:59.408746  591258 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:08:59.419060  591258 retry.go:31] will retry after 300.705713ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:59Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:08:59.720653  591258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:59.733498  591258 pause.go:52] kubelet running: false
	I1013 23:08:59.733599  591258 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:08:59.887917  591258 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:08:59.888006  591258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:08:59.957125  591258 cri.go:89] found id: "82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec"
	I1013 23:08:59.957198  591258 cri.go:89] found id: "1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca"
	I1013 23:08:59.957216  591258 cri.go:89] found id: "f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62"
	I1013 23:08:59.957233  591258 cri.go:89] found id: "58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac"
	I1013 23:08:59.957263  591258 cri.go:89] found id: "cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c"
	I1013 23:08:59.957287  591258 cri.go:89] found id: "cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25"
	I1013 23:08:59.957305  591258 cri.go:89] found id: "847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd"
	I1013 23:08:59.957324  591258 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:59.957343  591258 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:59.957375  591258 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:59.957400  591258 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:59.957420  591258 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:59.957439  591258 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:59.957482  591258 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:59.957506  591258 cri.go:89] found id: ""
	I1013 23:08:59.957582  591258 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:08:59.972090  591258 out.go:203] 
	W1013 23:08:59.975255  591258 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:08:59.975463  591258 out.go:285] * 
	W1013 23:08:59.983444  591258 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:08:59.986455  591258 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-836584 --alsologtostderr -v=5" : exit status 80
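The root failure is that `sudo runc list -f json` errors out when runc's state directory /run/runc is absent, instead of reporting an empty container list, so the pause aborts even though nothing is left to pause. A hedged sketch of a more tolerant listing (listRunc and the fallback are illustrative, not minikube's actual fix):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer models the fields of interest in `runc list -f json`
// output (the real schema carries more fields).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunc is an illustrative sketch, not minikube's actual code: it
// runs the same `sudo runc list -f json` as the trace above, but
// treats a missing /run/runc state directory as "no containers"
// instead of a hard error -- the condition that aborted the pause.
func listRunc() ([]runcContainer, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // state dir absent: nothing is running
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	return cs, nil
}

func main() {
	cs, err := listRunc()
	fmt.Println(cs, err)
}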
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-836584
helpers_test.go:243: (dbg) docker inspect pause-836584:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9",
	        "Created": "2025-10-13T23:07:10.814555889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:07:10.876003415Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/hosts",
	        "LogPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9-json.log",
	        "Name": "/pause-836584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-836584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-836584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9",
	                "LowerDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-836584",
	                "Source": "/var/lib/docker/volumes/pause-836584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-836584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-836584",
	                "name.minikube.sigs.k8s.io": "pause-836584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cd0182f669995fd9c8d514126c8de834d8f2f4400daa4f90e2ebe3e46891a4b",
	            "SandboxKey": "/var/run/docker/netns/5cd0182f6699",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-836584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e8:c4:34:82:a4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "786fd759c03007698ab28769589888093299db9eb6fb29c4eea9eadee6b21ed9",
	                    "EndpointID": "7a0e9b546378a2e7b64ed42a91fdb70f1020f2c92620441a3b9d022e01574d48",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-836584",
	                        "0c2a622a7ec4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
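For reference, the SSH endpoint used earlier (127.0.0.1:33419) comes straight out of this inspect output: the test's --format template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` indexes NetworkSettings.Ports. The same lookup done by parsing the JSON, as an illustrative sketch:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectedContainer models just the NetworkSettings.Ports portion of
// the `docker inspect` output shown above.
type inspectedContainer struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshHostPort resolves the host port bound to the container's 22/tcp,
// the same lookup the test performs with a --format template.
func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		return "", err
	}
	var infos []inspectedContainer
	if err := json.Unmarshal(out, &infos); err != nil {
		return "", err
	}
	if len(infos) == 0 {
		return "", fmt.Errorf("no such container: %s", name)
	}
	bindings := infos[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for 22/tcp on %s", name)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := sshHostPort("pause-836584")
	fmt.Println(port, err) // 33419 for the run captured above
}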
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-836584 -n pause-836584
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-836584 -n pause-836584: exit status 2 (636.005363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-836584 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-836584 logs -n 25: (1.500727856s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-762540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:03 UTC │
	│ start   │ -p missing-upgrade-354983 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-354983    │ jenkins │ v1.32.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:04 UTC │
	│ delete  │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p missing-upgrade-354983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-354983    │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ ssh     │ -p NoKubernetes-762540 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │                     │
	│ stop    │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p NoKubernetes-762540 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ ssh     │ -p NoKubernetes-762540 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │                     │
	│ delete  │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:05 UTC │
	│ delete  │ -p missing-upgrade-354983                                                                                                                │ missing-upgrade-354983    │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p stopped-upgrade-633601 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-633601    │ jenkins │ v1.32.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ stop    │ -p kubernetes-upgrade-211312                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │                     │
	│ stop    │ stopped-upgrade-633601 stop                                                                                                              │ stopped-upgrade-633601    │ jenkins │ v1.32.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p stopped-upgrade-633601 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-633601    │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:06 UTC │
	│ delete  │ -p stopped-upgrade-633601                                                                                                                │ stopped-upgrade-633601    │ jenkins │ v1.37.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:06 UTC │
	│ start   │ -p running-upgrade-276330 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-276330    │ jenkins │ v1.32.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:06 UTC │
	│ start   │ -p running-upgrade-276330 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-276330    │ jenkins │ v1.37.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:07 UTC │
	│ delete  │ -p running-upgrade-276330                                                                                                                │ running-upgrade-276330    │ jenkins │ v1.37.0 │ 13 Oct 25 23:07 UTC │ 13 Oct 25 23:07 UTC │
	│ start   │ -p pause-836584 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:07 UTC │ 13 Oct 25 23:08 UTC │
	│ start   │ -p pause-836584 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:08 UTC │ 13 Oct 25 23:08 UTC │
	│ pause   │ -p pause-836584 --alsologtostderr -v=5                                                                                                   │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:08:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:08:25.826112  589554 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:08:25.826686  589554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:25.826720  589554 out.go:374] Setting ErrFile to fd 2...
	I1013 23:08:25.826742  589554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:25.827043  589554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:08:25.827469  589554 out.go:368] Setting JSON to false
	I1013 23:08:25.828484  589554 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10242,"bootTime":1760386664,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:08:25.828582  589554 start.go:141] virtualization:  
	I1013 23:08:25.831765  589554 out.go:179] * [pause-836584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:08:25.835702  589554 notify.go:220] Checking for updates...
	I1013 23:08:25.836577  589554 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:08:25.840252  589554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:08:25.843249  589554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:08:25.846151  589554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:08:25.849091  589554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:08:25.853090  589554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:08:25.856593  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:25.857317  589554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:08:25.888817  589554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:08:25.888944  589554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:08:25.976172  589554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:08:25.965913228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:08:25.976276  589554 docker.go:318] overlay module found
	I1013 23:08:25.979744  589554 out.go:179] * Using the docker driver based on existing profile
	I1013 23:08:25.982676  589554 start.go:305] selected driver: docker
	I1013 23:08:25.982693  589554 start.go:925] validating driver "docker" against &{Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:25.982870  589554 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:08:25.982986  589554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:08:26.083296  589554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:08:26.072163796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:08:26.083708  589554 cni.go:84] Creating CNI manager for ""
	I1013 23:08:26.083767  589554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:08:26.083812  589554 start.go:349] cluster config:
	{Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:26.087011  589554 out.go:179] * Starting "pause-836584" primary control-plane node in "pause-836584" cluster
	I1013 23:08:26.089777  589554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:08:26.092758  589554 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:08:26.095572  589554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:08:26.095628  589554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:08:26.095640  589554 cache.go:58] Caching tarball of preloaded images
	I1013 23:08:26.095728  589554 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:08:26.095738  589554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:08:26.095880  589554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/config.json ...
	I1013 23:08:26.096182  589554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:08:26.125358  589554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:08:26.125378  589554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:08:26.125398  589554 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:08:26.125420  589554 start.go:360] acquireMachinesLock for pause-836584: {Name:mka7814e49a7b0446c04d5da0315da29b4254871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:08:26.125473  589554 start.go:364] duration metric: took 37.956µs to acquireMachinesLock for "pause-836584"
	I1013 23:08:26.125492  589554 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:08:26.125502  589554 fix.go:54] fixHost starting: 
	I1013 23:08:26.125775  589554 cli_runner.go:164] Run: docker container inspect pause-836584 --format={{.State.Status}}
	I1013 23:08:26.158350  589554 fix.go:112] recreateIfNeeded on pause-836584: state=Running err=<nil>
	W1013 23:08:26.158383  589554 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:08:22.465707  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:22.466144  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:22.466219  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:22.466302  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:22.492291  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:22.492310  576356 cri.go:89] found id: ""
	I1013 23:08:22.492318  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:22.492374  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.497453  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:22.497573  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:22.524860  576356 cri.go:89] found id: ""
	I1013 23:08:22.524940  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.524954  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:22.524962  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:22.525018  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:22.550687  576356 cri.go:89] found id: ""
	I1013 23:08:22.550715  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.550725  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:22.550732  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:22.550844  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:22.578955  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:22.579031  576356 cri.go:89] found id: ""
	I1013 23:08:22.579055  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:22.579160  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.583483  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:22.583554  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:22.610385  576356 cri.go:89] found id: ""
	I1013 23:08:22.610411  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.610420  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:22.610426  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:22.610541  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:22.637325  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:22.637345  576356 cri.go:89] found id: "e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:22.637351  576356 cri.go:89] found id: ""
	I1013 23:08:22.637386  576356 logs.go:282] 2 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53]
	I1013 23:08:22.637445  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.641125  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.644584  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:22.644721  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:22.671130  576356 cri.go:89] found id: ""
	I1013 23:08:22.671161  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.671170  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:22.671177  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:22.671233  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:22.698431  576356 cri.go:89] found id: ""
	I1013 23:08:22.698457  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.698467  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:22.698480  576356 logs.go:123] Gathering logs for kube-controller-manager [e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53] ...
	I1013 23:08:22.698493  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:22.725777  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:22.725802  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:22.781676  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:22.781712  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:22.815315  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:22.815344  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:22.930391  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:22.930467  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:22.948545  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:22.948616  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:23.033366  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:23.033442  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:23.033471  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:23.089528  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:23.089566  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:23.128827  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:23.128860  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
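
The loop above is minikube shelling out to crictl: for each expected control-plane component it enumerates matching containers, then tails the last 400 lines of any it finds. A minimal Go sketch of the same pattern, assuming crictl is on PATH and the caller can sudo; containerIDs is an illustrative helper name, not minikube's actual API:

// list CRI container IDs by name via crictl, then tail each one's logs
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the IDs.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring `crictl logs --tail 400 <id>` above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
		}
	}
}

The --quiet flag reduces the output to bare container IDs, which is why empty output maps directly to the "No container was found matching" warnings in the log.
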
	I1013 23:08:25.681056  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:25.681447  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
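
The healthz probe above fails fast while the apiserver is down, and the dial error is logged as "stopped" rather than treated as fatal. A rough equivalent of that probe; TLS verification is skipped here for brevity, whereas minikube's real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// e.g. dial tcp 192.168.76.2:8443: connect: connection refused
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
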
	I1013 23:08:25.681495  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:25.681555  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:25.724111  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:25.724131  576356 cri.go:89] found id: ""
	I1013 23:08:25.724139  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:25.724195  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.730940  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:25.731013  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:25.783070  576356 cri.go:89] found id: ""
	I1013 23:08:25.783182  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.783191  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:25.783203  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:25.783258  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:25.812938  576356 cri.go:89] found id: ""
	I1013 23:08:25.812961  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.812970  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:25.812976  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:25.813035  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:25.846941  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:25.846964  576356 cri.go:89] found id: ""
	I1013 23:08:25.846972  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:25.847026  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.851218  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:25.851286  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:25.890851  576356 cri.go:89] found id: ""
	I1013 23:08:25.890871  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.890879  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:25.890885  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:25.890947  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:25.935248  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:25.935265  576356 cri.go:89] found id: "e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:25.935270  576356 cri.go:89] found id: ""
	I1013 23:08:25.935277  576356 logs.go:282] 2 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53]
	I1013 23:08:25.935336  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.939751  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.951300  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:25.951583  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:25.983229  576356 cri.go:89] found id: ""
	I1013 23:08:25.983247  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.983255  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:25.983261  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:25.983308  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:26.024623  576356 cri.go:89] found id: ""
	I1013 23:08:26.024646  576356 logs.go:282] 0 containers: []
	W1013 23:08:26.024655  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:26.024668  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:26.024681  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:26.095940  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:26.095966  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:26.137198  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:26.137221  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:26.292658  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:26.292732  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:26.310211  576356 logs.go:123] Gathering logs for kube-controller-manager [e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53] ...
	I1013 23:08:26.310289  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:26.338220  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:26.338246  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:26.425765  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:26.425786  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:26.425798  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:26.465120  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:26.465152  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:26.527414  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:26.527488  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:26.161994  589554 out.go:252] * Updating the running docker "pause-836584" container ...
	I1013 23:08:26.162031  589554 machine.go:93] provisionDockerMachine start ...
	I1013 23:08:26.162117  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.184875  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.185217  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.185227  589554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:08:26.339461  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-836584
	
	I1013 23:08:26.339536  589554 ubuntu.go:182] provisioning hostname "pause-836584"
	I1013 23:08:26.339629  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.362136  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.362436  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.362457  589554 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-836584 && echo "pause-836584" | sudo tee /etc/hostname
	I1013 23:08:26.534714  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-836584
	
	I1013 23:08:26.534835  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.556669  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.556979  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.557003  589554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-836584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-836584/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-836584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:08:26.703540  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
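
The SSH command above is an idempotent /etc/hosts edit: leave the file alone if the hostname already resolves, otherwise rewrite an existing 127.0.1.1 entry or append one. A minimal in-process sketch of the same logic, assuming local file access; ensureHostname is an illustrative name:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	text := string(data)
	// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(hostsPath, []byte(text), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "pause-836584"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
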
	I1013 23:08:26.703632  589554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:08:26.703683  589554 ubuntu.go:190] setting up certificates
	I1013 23:08:26.703712  589554 provision.go:84] configureAuth start
	I1013 23:08:26.703816  589554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-836584
	I1013 23:08:26.720783  589554 provision.go:143] copyHostCerts
	I1013 23:08:26.720861  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:08:26.720880  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:08:26.720966  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:08:26.721075  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:08:26.721081  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:08:26.721107  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:08:26.721161  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:08:26.721166  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:08:26.721188  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:08:26.721239  589554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.pause-836584 san=[127.0.0.1 192.168.85.2 localhost minikube pause-836584]
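
The "generating server cert" step above issues a server certificate signed by the minikube CA with the listed SANs (127.0.0.1, 192.168.85.2, localhost, minikube, pause-836584). A self-contained crypto/x509 sketch of that shape; key sizes, validity, and the throwaway in-memory CA are simplifying assumptions (minikube loads its CA from ca.pem/ca-key.pem on disk):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example runs standalone.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-836584"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs mirroring: san=[127.0.0.1 192.168.85.2 localhost minikube pause-836584]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "pause-836584"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
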
	I1013 23:08:27.760579  589554 provision.go:177] copyRemoteCerts
	I1013 23:08:27.760650  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:08:27.760699  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:27.780470  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:27.883120  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:08:27.901658  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 23:08:27.919286  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:08:27.937395  589554 provision.go:87] duration metric: took 1.233643877s to configureAuth
	I1013 23:08:27.937423  589554 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:08:27.937681  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:27.937799  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:27.955004  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:27.955333  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:27.955357  589554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:08:29.059398  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:29.059845  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:29.059899  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:29.059958  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:29.084873  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:29.084893  576356 cri.go:89] found id: ""
	I1013 23:08:29.084901  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:29.084978  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.088665  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:29.088740  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:29.113523  576356 cri.go:89] found id: ""
	I1013 23:08:29.113545  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.113560  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:29.113567  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:29.113621  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:29.138843  576356 cri.go:89] found id: ""
	I1013 23:08:29.138866  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.138874  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:29.138881  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:29.138936  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:29.193052  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:29.193076  576356 cri.go:89] found id: ""
	I1013 23:08:29.193085  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:29.193142  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.197709  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:29.197783  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:29.234166  576356 cri.go:89] found id: ""
	I1013 23:08:29.234206  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.234216  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:29.234225  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:29.234293  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:29.266991  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:29.267016  576356 cri.go:89] found id: ""
	I1013 23:08:29.267025  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:29.267099  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.272390  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:29.272462  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:29.313348  576356 cri.go:89] found id: ""
	I1013 23:08:29.313371  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.313380  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:29.313387  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:29.313443  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:29.349600  576356 cri.go:89] found id: ""
	I1013 23:08:29.349622  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.349630  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:29.349662  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:29.349677  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:29.367155  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:29.367181  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1013 23:08:33.300311  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:08:33.300335  589554 machine.go:96] duration metric: took 7.138295607s to provisionDockerMachine
	I1013 23:08:33.300346  589554 start.go:293] postStartSetup for "pause-836584" (driver="docker")
	I1013 23:08:33.300357  589554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:08:33.300417  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:08:33.300465  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.320149  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.427764  589554 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:08:33.431317  589554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:08:33.431346  589554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:08:33.431357  589554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:08:33.431415  589554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:08:33.431494  589554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:08:33.431604  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:08:33.439744  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:08:33.458466  589554 start.go:296] duration metric: took 158.105017ms for postStartSetup
	I1013 23:08:33.458549  589554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:08:33.458594  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.475914  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.576873  589554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:08:33.582979  589554 fix.go:56] duration metric: took 7.457472962s for fixHost
	I1013 23:08:33.583006  589554 start.go:83] releasing machines lock for "pause-836584", held for 7.457524366s
	I1013 23:08:33.583090  589554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-836584
	I1013 23:08:33.600985  589554 ssh_runner.go:195] Run: cat /version.json
	I1013 23:08:33.601010  589554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:08:33.601046  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.601077  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.618727  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.625401  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.810034  589554 ssh_runner.go:195] Run: systemctl --version
	I1013 23:08:33.816658  589554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:08:33.857250  589554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:08:33.861977  589554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:08:33.862048  589554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:08:33.870175  589554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:08:33.870200  589554 start.go:495] detecting cgroup driver to use...
	I1013 23:08:33.870233  589554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:08:33.870279  589554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:08:33.886123  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:08:33.898818  589554 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:08:33.898878  589554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:08:33.915577  589554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:08:33.929398  589554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:08:34.072249  589554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:08:34.223230  589554 docker.go:234] disabling docker service ...
	I1013 23:08:34.223297  589554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:08:34.238972  589554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:08:34.252522  589554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:08:34.382988  589554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:08:34.523829  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:08:34.537841  589554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:08:34.551996  589554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:08:34.552113  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.561488  589554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:08:34.561562  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.570632  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.580478  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.589537  589554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:08:34.597526  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.606937  589554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.615273  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.625251  589554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:08:34.632791  589554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:08:34.640316  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:34.779463  589554 ssh_runner.go:195] Run: sudo systemctl restart crio
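
The sequence above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) and then restarts the service. A sketch of the key=value rewrites applied in-process instead of via sed; the rewrite rule is a simplified assumption and setKey is an illustrative helper:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey mimics: sed -i 's|^.*<key> = .*$|<key> = "<val>"|' on the conf text.
func setKey(conf, key, val string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, val))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	conf := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
	// As in the log, a `systemctl restart crio` is still required afterwards.
}
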
	I1013 23:08:34.947966  589554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:08:34.948040  589554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:08:34.951960  589554 start.go:563] Will wait 60s for crictl version
	I1013 23:08:34.952028  589554 ssh_runner.go:195] Run: which crictl
	I1013 23:08:34.955809  589554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:08:34.982279  589554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:08:34.982365  589554 ssh_runner.go:195] Run: crio --version
	I1013 23:08:35.012290  589554 ssh_runner.go:195] Run: crio --version
	I1013 23:08:35.048748  589554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:08:35.051769  589554 cli_runner.go:164] Run: docker network inspect pause-836584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:08:35.069009  589554 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:08:35.073408  589554 kubeadm.go:883] updating cluster {Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:08:35.073554  589554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:08:35.073629  589554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:08:35.110900  589554 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:08:35.110925  589554 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:08:35.110986  589554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:08:35.136232  589554 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:08:35.136256  589554 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:08:35.136265  589554 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 23:08:35.136379  589554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-836584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
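
The kubelet drop-in above blanks ExecStart and re-declares it with node-specific flags. A text/template sketch of rendering that drop-in; the template body mirrors the logged unit, while the map of values is an assumption standing in for minikube's cluster config:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log lines above.
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "pause-836584",
		"NodeIP":            "192.168.85.2",
	})
}

The empty ExecStart= line is deliberate systemd syntax: it clears the ExecStart inherited from the base kubelet.service before the override is set.
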
	I1013 23:08:35.136469  589554 ssh_runner.go:195] Run: crio config
	I1013 23:08:35.205434  589554 cni.go:84] Creating CNI manager for ""
	I1013 23:08:35.205459  589554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:08:35.205482  589554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:08:35.205507  589554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-836584 NodeName:pause-836584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:08:35.205687  589554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-836584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
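
The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch that walks such a stream and prints each document's apiVersion/kind, using gopkg.in/yaml.v3; the local file name is an assumption:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode advances one `---`-separated document per call.
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
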
	
	I1013 23:08:35.205767  589554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:08:35.214450  589554 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:08:35.214569  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:08:35.222361  589554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 23:08:35.235792  589554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:08:35.249712  589554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1013 23:08:35.262999  589554 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:08:35.266945  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:35.410521  589554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:08:35.424380  589554 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584 for IP: 192.168.85.2
	I1013 23:08:35.424404  589554 certs.go:195] generating shared ca certs ...
	I1013 23:08:35.424420  589554 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:35.424627  589554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:08:35.424697  589554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:08:35.424710  589554 certs.go:257] generating profile certs ...
	I1013 23:08:35.424816  589554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key
	I1013 23:08:35.424905  589554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.key.d1c58bc8
	I1013 23:08:35.424988  589554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.key
	I1013 23:08:35.425163  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:08:35.425216  589554 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:08:35.425234  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:08:35.425265  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:08:35.425307  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:08:35.425339  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:08:35.425401  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:08:35.426025  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:08:35.445265  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:08:35.463303  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:08:35.481486  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:08:35.500283  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 23:08:35.518947  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:08:35.537149  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:08:35.555241  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:08:35.573600  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:08:35.592023  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:08:35.609874  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:08:35.627709  589554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:08:35.641284  589554 ssh_runner.go:195] Run: openssl version
	I1013 23:08:35.647997  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:08:35.656632  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.660448  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.660571  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.701913  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:08:35.709923  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:08:35.718236  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.722190  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.722312  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.763227  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:08:35.771131  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:08:35.779877  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.783656  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.783771  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.826262  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
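
Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the OpenSSL trust directory under its subject-hash name (e.g. b5213941.0 for minikubeCA.pem). A sketch of one such step; it shells out to openssl for the hash since Go's standard library does not expose OpenSSL's subject-hash algorithm:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println("symlink failed:", err)
	}
}
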
	I1013 23:08:35.834692  589554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:08:35.838685  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:08:35.880240  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:08:35.921181  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:08:35.962600  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:08:36.014023  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:08:36.056107  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
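
The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours. The same check in pure Go with crypto/x509; the path is one of the files from the log and expiresWithin is an illustrative helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
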
	I1013 23:08:36.098291  589554 kubeadm.go:400] StartCluster: {Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:36.098409  589554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:08:36.098509  589554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:08:36.129529  589554 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:36.129550  589554 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:36.129555  589554 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:36.129559  589554 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:36.129562  589554 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:36.129565  589554 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:36.129568  589554 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:36.129571  589554 cri.go:89] found id: ""
	I1013 23:08:36.129622  589554 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:08:36.140681  589554 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:36Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:08:36.140769  589554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:08:36.148992  589554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:08:36.149087  589554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:08:36.149226  589554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:08:36.158317  589554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:08:36.159067  589554 kubeconfig.go:125] found "pause-836584" server: "https://192.168.85.2:8443"
	I1013 23:08:36.159921  589554 kapi.go:59] client config for pause-836584: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key", CAFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 23:08:36.160399  589554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 23:08:36.160418  589554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 23:08:36.160426  589554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 23:08:36.160431  589554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 23:08:36.160435  589554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 23:08:36.160809  589554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:08:36.169489  589554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:08:36.169566  589554 kubeadm.go:601] duration metric: took 20.457709ms to restartPrimaryControlPlane
	I1013 23:08:36.169586  589554 kubeadm.go:402] duration metric: took 71.317932ms to StartCluster
	I1013 23:08:36.169602  589554 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:36.169678  589554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:08:36.170552  589554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:36.170789  589554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:08:36.171192  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:36.171136  589554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:08:36.176216  589554 out.go:179] * Verifying Kubernetes components...
	I1013 23:08:36.176216  589554 out.go:179] * Enabled addons: 
	I1013 23:08:36.179102  589554 addons.go:514] duration metric: took 7.925836ms for enable addons: enabled=[]
	I1013 23:08:36.179140  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:36.315785  589554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:08:36.329056  589554 node_ready.go:35] waiting up to 6m0s for node "pause-836584" to be "Ready" ...
	W1013 23:08:38.329678  589554 node_ready.go:55] error getting node "pause-836584" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/pause-836584": dial tcp 192.168.85.2:8443: connect: connection refused
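
The connection-refused retries are the node Ready poll running while the apiserver is still coming back up; the equivalent manual poll, as a sketch (kubeconfig path taken from the settings.go line above):

    kubectl --kubeconfig /home/jenkins/minikube-integration/21724-428797/kubeconfig \
      get node pause-836584 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
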
	I1013 23:08:39.459904  576356 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.092703029s)
	W1013 23:08:39.459938  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1013 23:08:39.459946  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:39.459957  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:39.512010  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:39.512083  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:39.600242  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:39.600290  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:39.647074  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:39.647113  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:39.731003  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:39.731047  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:39.825761  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:39.825800  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
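
Each "Gathering logs for ..." step above is a one-line shell command over SSH; the same bundle can be collected by hand on the node, as a sketch (commands copied from the ssh_runner lines, container id left as a placeholder):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>
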
	I1013 23:08:43.720131  589554 node_ready.go:49] node "pause-836584" is "Ready"
	I1013 23:08:43.720157  589554 node_ready.go:38] duration metric: took 7.391068457s for node "pause-836584" to be "Ready" ...
	I1013 23:08:43.720172  589554 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:08:43.720236  589554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:08:43.736558  589554 api_server.go:72] duration metric: took 7.565731408s to wait for apiserver process to appear ...
	I1013 23:08:43.736582  589554 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:08:43.736602  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:43.745950  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:08:43.746034  589554 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
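
The bracketed [+]/[-] lines are kube-apiserver's verbose healthz report; the two [-] poststarthooks (the rbac bootstrap-roles hook and the priority-class bootstrap) are expected to clear within moments of startup, which the 200 at 23:08:44.745929 further below confirms. Querying the same endpoint directly, as a sketch (verbose query parameter assumed; certs as in the rest.Config dump):

    curl -sk \
      --cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.crt \
      --key  /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key \
      "https://192.168.85.2:8443/healthz?verbose"
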
	I1013 23:08:44.237728  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:44.246353  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:08:44.246390  589554 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 23:08:44.736730  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:44.745929  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:08:44.747021  589554 api_server.go:141] control plane version: v1.34.1
	I1013 23:08:44.747115  589554 api_server.go:131] duration metric: took 1.010525037s to wait for apiserver health ...
	I1013 23:08:44.747133  589554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:08:44.751045  589554 system_pods.go:59] 7 kube-system pods found
	I1013 23:08:44.751124  589554 system_pods.go:61] "coredns-66bc5c9577-q58xv" [4f11b874-eb7d-44fd-9044-8d0db7aa854f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:08:44.751141  589554 system_pods.go:61] "etcd-pause-836584" [1f2ebe8e-8752-4987-acab-293f657488da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:08:44.751148  589554 system_pods.go:61] "kindnet-bpjsz" [ad8e0981-fc54-4ff2-bb74-451df2da5b37] Running
	I1013 23:08:44.751158  589554 system_pods.go:61] "kube-apiserver-pause-836584" [e16e2454-7502-4521-b9b5-45a1bbc904cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:08:44.751171  589554 system_pods.go:61] "kube-controller-manager-pause-836584" [8a253953-fc36-453c-abc5-55336d81fe35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:08:44.751191  589554 system_pods.go:61] "kube-proxy-kcs2m" [88c502fa-2e77-4baf-a3be-69a82b2da46d] Running
	I1013 23:08:44.751202  589554 system_pods.go:61] "kube-scheduler-pause-836584" [540ed643-d210-4189-9035-72d23e456d08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:08:44.751219  589554 system_pods.go:74] duration metric: took 4.078993ms to wait for pod list to return data ...
	I1013 23:08:44.751228  589554 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:08:44.754184  589554 default_sa.go:45] found service account: "default"
	I1013 23:08:44.754212  589554 default_sa.go:55] duration metric: took 2.974213ms for default service account to be created ...
	I1013 23:08:44.754223  589554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:08:44.757773  589554 system_pods.go:86] 7 kube-system pods found
	I1013 23:08:44.757809  589554 system_pods.go:89] "coredns-66bc5c9577-q58xv" [4f11b874-eb7d-44fd-9044-8d0db7aa854f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:08:44.757820  589554 system_pods.go:89] "etcd-pause-836584" [1f2ebe8e-8752-4987-acab-293f657488da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:08:44.757847  589554 system_pods.go:89] "kindnet-bpjsz" [ad8e0981-fc54-4ff2-bb74-451df2da5b37] Running
	I1013 23:08:44.757859  589554 system_pods.go:89] "kube-apiserver-pause-836584" [e16e2454-7502-4521-b9b5-45a1bbc904cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:08:44.757867  589554 system_pods.go:89] "kube-controller-manager-pause-836584" [8a253953-fc36-453c-abc5-55336d81fe35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:08:44.757881  589554 system_pods.go:89] "kube-proxy-kcs2m" [88c502fa-2e77-4baf-a3be-69a82b2da46d] Running
	I1013 23:08:44.757888  589554 system_pods.go:89] "kube-scheduler-pause-836584" [540ed643-d210-4189-9035-72d23e456d08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:08:44.757896  589554 system_pods.go:126] duration metric: took 3.666686ms to wait for k8s-apps to be running ...
	I1013 23:08:44.757905  589554 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:08:44.757980  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:44.771815  589554 system_svc.go:56] duration metric: took 13.899106ms WaitForService to wait for kubelet
	I1013 23:08:44.771886  589554 kubeadm.go:586] duration metric: took 8.60106475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:08:44.771923  589554 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:08:44.775425  589554 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:08:44.775468  589554 node_conditions.go:123] node cpu capacity is 2
	I1013 23:08:44.775481  589554 node_conditions.go:105] duration metric: took 3.551661ms to run NodePressure ...
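
The NodePressure step simply reads capacity off the node object (the same 2-CPU and 203034800Ki figures reappear in the describe-nodes output further down); a sketch of the equivalent read:

    kubectl --kubeconfig /home/jenkins/minikube-integration/21724-428797/kubeconfig \
      get node pause-836584 \
      -o jsonpath='cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}'
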
	I1013 23:08:44.775494  589554 start.go:241] waiting for startup goroutines ...
	I1013 23:08:44.775502  589554 start.go:246] waiting for cluster config update ...
	I1013 23:08:44.775511  589554 start.go:255] writing updated cluster config ...
	I1013 23:08:44.775856  589554 ssh_runner.go:195] Run: rm -f paused
	I1013 23:08:44.779984  589554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:08:44.780654  589554 kapi.go:59] client config for pause-836584: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key", CAFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 23:08:44.783957  589554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q58xv" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:42.511093  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1013 23:08:46.790702  589554 pod_ready.go:104] pod "coredns-66bc5c9577-q58xv" is not "Ready", error: <nil>
	W1013 23:08:49.290111  589554 pod_ready.go:104] pod "coredns-66bc5c9577-q58xv" is not "Ready", error: <nil>
	I1013 23:08:47.512031  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1013 23:08:47.512096  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:47.512164  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:47.540436  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:47.540457  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:47.540462  576356 cri.go:89] found id: ""
	I1013 23:08:47.540469  576356 logs.go:282] 2 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:47.540527  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.544392  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.548124  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:47.548194  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:47.574835  576356 cri.go:89] found id: ""
	I1013 23:08:47.574860  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.574868  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:47.574874  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:47.574930  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:47.602049  576356 cri.go:89] found id: ""
	I1013 23:08:47.602074  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.602084  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:47.602091  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:47.602150  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:47.632732  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:47.632752  576356 cri.go:89] found id: ""
	I1013 23:08:47.632761  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:47.632818  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.636923  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:47.637002  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:47.663488  576356 cri.go:89] found id: ""
	I1013 23:08:47.663514  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.663523  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:47.663530  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:47.663588  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:47.691780  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:47.691813  576356 cri.go:89] found id: ""
	I1013 23:08:47.691822  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:47.691883  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.695802  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:47.695879  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:47.724693  576356 cri.go:89] found id: ""
	I1013 23:08:47.724720  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.724728  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:47.724735  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:47.724794  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:47.762441  576356 cri.go:89] found id: ""
	I1013 23:08:47.762466  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.762474  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:47.762489  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:47.762503  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:47.835568  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:47.835606  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:47.867733  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:47.867761  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:47.904517  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:47.904553  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:47.968153  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:47.968190  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:48.005363  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:48.005397  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:48.120399  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:48.120435  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:48.136891  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:48.136932  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1013 23:08:51.030994  576356 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.894042568s)
	W1013 23:08:51.031046  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:37388->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:37388->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1013 23:08:51.031054  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:08:51.031109  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:51.290127  589554 pod_ready.go:94] pod "coredns-66bc5c9577-q58xv" is "Ready"
	I1013 23:08:51.290151  589554 pod_ready.go:86] duration metric: took 6.506165311s for pod "coredns-66bc5c9577-q58xv" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:51.294193  589554 pod_ready.go:83] waiting for pod "etcd-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:08:53.300993  589554 pod_ready.go:104] pod "etcd-pause-836584" is not "Ready", error: <nil>
	I1013 23:08:55.799175  589554 pod_ready.go:94] pod "etcd-pause-836584" is "Ready"
	I1013 23:08:55.799208  589554 pod_ready.go:86] duration metric: took 4.504990747s for pod "etcd-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.801370  589554 pod_ready.go:83] waiting for pod "kube-apiserver-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.806164  589554 pod_ready.go:94] pod "kube-apiserver-pause-836584" is "Ready"
	I1013 23:08:55.806193  589554 pod_ready.go:86] duration metric: took 4.79554ms for pod "kube-apiserver-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.808559  589554 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.813040  589554 pod_ready.go:94] pod "kube-controller-manager-pause-836584" is "Ready"
	I1013 23:08:55.813068  589554 pod_ready.go:86] duration metric: took 4.485844ms for pod "kube-controller-manager-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.815550  589554 pod_ready.go:83] waiting for pod "kube-proxy-kcs2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:53.568705  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:53.569209  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:53.569267  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:53.569324  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:53.596366  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:53.596385  576356 cri.go:89] found id: ""
	I1013 23:08:53.596395  576356 logs.go:282] 1 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684]
	I1013 23:08:53.596452  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.600146  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:53.600227  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:53.628792  576356 cri.go:89] found id: ""
	I1013 23:08:53.628818  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.628827  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:53.628834  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:53.628893  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:53.661296  576356 cri.go:89] found id: ""
	I1013 23:08:53.661319  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.661327  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:53.661334  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:53.661396  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:53.692678  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:53.692704  576356 cri.go:89] found id: ""
	I1013 23:08:53.692713  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:53.692767  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.696444  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:53.696546  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:53.723295  576356 cri.go:89] found id: ""
	I1013 23:08:53.723321  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.723338  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:53.723363  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:53.723445  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:53.758927  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:53.758989  576356 cri.go:89] found id: ""
	I1013 23:08:53.759011  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:53.759120  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.763365  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:53.763463  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:53.789306  576356 cri.go:89] found id: ""
	I1013 23:08:53.789330  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.789339  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:53.789345  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:53.789409  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:53.818961  576356 cri.go:89] found id: ""
	I1013 23:08:53.818986  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.818995  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:53.819004  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:53.819017  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:53.877522  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:53.877559  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:53.903631  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:53.903660  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:53.964689  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:53.964725  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:53.995510  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:53.995536  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:54.127411  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:54.127449  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:54.143924  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:54.143958  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:54.218410  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:54.218478  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:08:54.218505  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:55.997875  589554 pod_ready.go:94] pod "kube-proxy-kcs2m" is "Ready"
	I1013 23:08:55.997904  589554 pod_ready.go:86] duration metric: took 182.327999ms for pod "kube-proxy-kcs2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:56.198146  589554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:57.797419  589554 pod_ready.go:94] pod "kube-scheduler-pause-836584" is "Ready"
	I1013 23:08:57.797445  589554 pod_ready.go:86] duration metric: took 1.59927148s for pod "kube-scheduler-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:57.797457  589554 pod_ready.go:40] duration metric: took 13.01744102s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
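
The 13s "extra waiting" pass iterates the six label selectors named above; a hand-rolled equivalent, as a sketch (kubectl wait assumed available):

    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --kubeconfig /home/jenkins/minikube-integration/21724-428797/kubeconfig \
        -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=4m
    done
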
	I1013 23:08:57.849089  589554 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:08:57.852493  589554 out.go:179] * Done! kubectl is now configured to use "pause-836584" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.003478073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.011203232Z" level=info msg="Created container 1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca: kube-system/kindnet-bpjsz/kindnet-cni" id=9e446384-1086-4101-8b59-7b19444680ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.013510227Z" level=info msg="Starting container: 1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca" id=6a280df7-2541-4add-8ce3-3d72ffd9e5a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.01594521Z" level=info msg="Started container" PID=2361 containerID=1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca description=kube-system/kindnet-bpjsz/kindnet-cni id=6a280df7-2541-4add-8ce3-3d72ffd9e5a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df2b5156466d11e06aacfc2d7317ffc1388da1b351a19278b22701d925027df4
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.034185573Z" level=info msg="Created container 82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec: kube-system/coredns-66bc5c9577-q58xv/coredns" id=c7a4db3a-504f-4b76-8d10-a2fe826f7a5b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.036280182Z" level=info msg="Starting container: 82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec" id=cefe4b5f-810b-44f6-9681-16bf4cc53475 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.037874158Z" level=info msg="Started container" PID=2369 containerID=82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec description=kube-system/coredns-66bc5c9577-q58xv/coredns id=cefe4b5f-810b-44f6-9681-16bf4cc53475 name=/runtime.v1.RuntimeService/StartContainer sandboxID=47fad4b7d78b0959ee943b82956985e590d6ae16319ace36406fd33438e21e04
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.496450909Z" level=info msg="Created container f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62: kube-system/kube-proxy-kcs2m/kube-proxy" id=e2017455-8108-440b-88c4-f99083ea48ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.497879924Z" level=info msg="Starting container: f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62" id=f4cd814e-48f7-41cc-9541-5221dbd60175 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.501016702Z" level=info msg="Started container" PID=2356 containerID=f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62 description=kube-system/kube-proxy-kcs2m/kube-proxy id=f4cd814e-48f7-41cc-9541-5221dbd60175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d11765ef364d8c8e4cd2fb273cc62bca12ef79d529e97221b2f95bc599fcf2b
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.406179093Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409728465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409762909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409788804Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.413099338Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.41313448Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.41315811Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416383387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416418594Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416442946Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.42023146Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.420266068Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.42029111Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.423544521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.423578654Z" level=info msg="Updated default CNI network name to kindnet"
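
The CREATE/WRITE/RENAME event triples above are kindnet rewriting its CNI conflist atomically (write a .temp file, then rename it into place), with CRI-O re-reading the network after each event. To follow these events live, as a sketch:

    minikube ssh -p pause-836584 -- sudo journalctl -u crio -f
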
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	82b32b0378d02       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   47fad4b7d78b0       coredns-66bc5c9577-q58xv               kube-system
	1b5152b89b473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   df2b5156466d1       kindnet-bpjsz                          kube-system
	f6343d4559199       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   6d11765ef364d       kube-proxy-kcs2m                       kube-system
	58af9ab254a42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   90bc0139660f4       etcd-pause-836584                      kube-system
	cebeaada2da3d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   16d57ec78898d       kube-scheduler-pause-836584            kube-system
	cc7bd33116bc4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   1a505af9bc53c       kube-controller-manager-pause-836584   kube-system
	847763d2657e4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   5e69143022fa9       kube-apiserver-pause-836584            kube-system
	932b463244a48       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   47fad4b7d78b0       coredns-66bc5c9577-q58xv               kube-system
	05d9739f49159       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6d11765ef364d       kube-proxy-kcs2m                       kube-system
	aa8add4e7d15a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   df2b5156466d1       kindnet-bpjsz                          kube-system
	87d15b6caafad       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   16d57ec78898d       kube-scheduler-pause-836584            kube-system
	f762c1241881f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   5e69143022fa9       kube-apiserver-pause-836584            kube-system
	7a4920c5bd303       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   90bc0139660f4       etcd-pause-836584                      kube-system
	ad46aca3d42a7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1a505af9bc53c       kube-controller-manager-pause-836584   kube-system
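
Reading the table: every control-plane container shows a Running ATTEMPT 1 alongside its Exited ATTEMPT 0 in the same pod sandbox, i.e. exactly one restart each across the pause/unpause cycle, consistent with the kubelet restart in the log above. The table itself is plain crictl output; as a sketch:

    minikube ssh -p pause-836584 -- sudo crictl ps -a
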
	
	
	==> coredns [82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36789 - 53866 "HINFO IN 3349287790880305469.4701604218403196471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031464924s
	
	
	==> coredns [932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44100 - 21424 "HINFO IN 5086195233547606467.5478536643543823063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020337465s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
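
This second coredns block is attempt 0 shutting down (SIGTERM, then the 5s lameduck window); per-attempt logs can be pulled by container id from the status table, as a sketch:

    minikube ssh -p pause-836584 -- \
      sudo crictl logs 932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69
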
	
	
	==> describe nodes <==
	Name:               pause-836584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-836584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=pause-836584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_07_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:07:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-836584
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:08:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:08:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-836584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                e5dc741b-862c-4ecb-bae2-b81fa7a53143
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q58xv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-836584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-bpjsz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-836584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-836584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-kcs2m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-836584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-836584 status is now: NodeHasSufficientMemory
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-836584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-836584 status is now: NodeHasSufficientPID
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-836584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-836584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-836584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-836584 event: Registered Node pause-836584 in Controller
	  Normal   NodeReady                39s                kubelet          Node pause-836584 status is now: NodeReady
	  Warning  ContainerGCFailed        25s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s                node-controller  Node pause-836584 event: Registered Node pause-836584 in Controller
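
The ContainerGCFailed warning lines up with the window in which crio.sock was gone while CRI-O restarted, and the RegisteredNode pair brackets that same restart. This whole section is produced by the command the log runs repeatedly above, as a sketch:

    minikube ssh -p pause-836584 -- \
      sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
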
	
	
	==> dmesg <==
	[ +36.310157] overlayfs: idmapped layers are currently not supported
	[Oct13 22:41] overlayfs: idmapped layers are currently not supported
	[Oct13 22:42] overlayfs: idmapped layers are currently not supported
	[  +4.001885] overlayfs: idmapped layers are currently not supported
	[Oct13 22:43] overlayfs: idmapped layers are currently not supported
	[Oct13 22:44] overlayfs: idmapped layers are currently not supported
	[Oct13 22:45] overlayfs: idmapped layers are currently not supported
	[Oct13 22:50] overlayfs: idmapped layers are currently not supported
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac] <==
	{"level":"warn","ts":"2025-10-13T23:08:41.854238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.872085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.894339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.927273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.951296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.981594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.012191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.066380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.095614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.172087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.187165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.252696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.256868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.283356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.300077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.318726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.338310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.355169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.372430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.389508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.420806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.455953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.466844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.483946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.598812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34572","server-name":"","error":"EOF"}
	
	
	==> etcd [7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334] <==
	{"level":"warn","ts":"2025-10-13T23:07:32.525358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.540506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.562304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.587633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.617060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.628604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.722873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T23:08:28.128096Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T23:08:28.128146Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-836584","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-13T23:08:28.128248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T23:08:28.274687Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274846Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274886Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T23:08:28.274896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-13T23:08:28.274861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274952Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274967Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T23:08:28.274974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T23:08:28.274996Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-13T23:08:28.275066Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-13T23:08:28.275064Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T23:08:28.278207Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-13T23:08:28.278283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T23:08:28.278314Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:08:28.278322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-836584","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 23:09:01 up  2:51,  0 user,  load average: 1.66, 2.23, 2.05
	Linux pause-836584 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca] <==
	I1013 23:08:39.127780       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:08:39.128160       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:08:39.128353       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:08:39.128394       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:08:39.128458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:08:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:08:39.406282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:08:39.406384       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:08:39.406476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:08:39.410494       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 23:08:43.809671       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:08:43.809708       1 metrics.go:72] Registering metrics
	I1013 23:08:43.809778       1 controller.go:711] "Syncing nftables rules"
	I1013 23:08:49.405833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:49.405892       1 main.go:301] handling current node
	I1013 23:08:59.406428       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:59.406495       1 main.go:301] handling current node
	
	
	==> kindnet [aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9] <==
	I1013 23:07:42.207895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:07:42.208439       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:07:42.208647       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:07:42.208696       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:07:42.208736       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:07:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:07:42.409223       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:07:42.409301       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:07:42.409336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:07:42.409478       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:08:12.410037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:08:12.410036       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:08:12.410259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:08:12.503791       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 23:08:13.910437       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:08:13.910479       1 metrics.go:72] Registering metrics
	I1013 23:08:13.910553       1 controller.go:711] "Syncing nftables rules"
	I1013 23:08:22.415202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:22.415262       1 main.go:301] handling current node
	
	
	==> kube-apiserver [847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd] <==
	I1013 23:08:43.628016       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:08:43.648193       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:08:43.663792       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:08:43.698151       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:08:43.698287       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:08:43.698322       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:08:43.698371       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:08:43.698586       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1013 23:08:43.699700       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:08:43.700602       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:08:43.703989       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:08:43.704598       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 23:08:43.704620       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 23:08:43.704722       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:08:43.704811       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 23:08:43.704849       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:08:43.713967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 23:08:43.716280       1 policy_source.go:240] refreshing policies
	I1013 23:08:43.754304       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:08:44.319732       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:08:45.514938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:08:46.996991       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:08:47.155389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:08:47.198430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:08:47.299554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0] <==
	W1013 23:08:28.149795       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.149902       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150001       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150108       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150234       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150569       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150711       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150829       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150934       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151037       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151312       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151629       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151729       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151785       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151864       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151917       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151971       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152021       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152074       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152130       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152178       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152223       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152271       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152323       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152370       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa] <==
	I1013 23:07:40.826524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:07:40.839494       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:07:40.827061       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 23:07:40.828944       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:07:40.826907       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:07:40.826968       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 23:07:40.827023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:07:40.853938       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:07:40.854126       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:07:40.854200       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:07:40.854274       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-836584"
	I1013 23:07:40.854311       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:07:40.854336       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:07:40.861675       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:07:40.863776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:07:40.863907       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:07:40.864288       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:07:40.864632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:07:40.879004       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:07:40.879960       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:07:40.879174       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:07:40.887995       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:07:40.888361       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 23:07:40.906763       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 23:08:25.862387       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25] <==
	I1013 23:08:46.946464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:08:46.948290       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 23:08:46.948351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 23:08:46.948373       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 23:08:46.948379       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 23:08:46.948385       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 23:08:46.948464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:08:46.948553       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:08:46.953363       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:08:46.955488       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:08:46.967964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:08:46.974164       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 23:08:46.976506       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:08:46.978722       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:08:46.984970       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:08:46.989527       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:08:46.989933       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:08:46.989999       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:08:46.990193       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:08:46.990678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:08:46.990840       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:08:46.990873       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:08:46.999853       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:08:46.999877       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:08:46.999886       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010] <==
	I1013 23:07:42.181818       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:07:42.275228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:07:42.375759       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:07:42.375890       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:07:42.376001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:07:42.397075       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:07:42.397130       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:07:42.401626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:07:42.401958       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:07:42.402033       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:07:42.415276       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:07:42.415361       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:07:42.415412       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:07:42.415441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:07:42.417497       1 config.go:309] "Starting node config controller"
	I1013 23:07:42.420385       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:07:42.420402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:07:42.418093       1 config.go:200] "Starting service config controller"
	I1013 23:07:42.420412       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:07:42.516089       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:07:42.516089       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:07:42.521311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62] <==
	I1013 23:08:39.777416       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:08:41.700276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:08:43.721851       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:08:43.721946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:08:43.722051       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:08:43.769795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:08:43.769859       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:08:43.778511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:08:43.778825       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:08:43.779003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:08:43.780337       1 config.go:200] "Starting service config controller"
	I1013 23:08:43.780400       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:08:43.780453       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:08:43.780481       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:08:43.780517       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:08:43.780545       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:08:43.781241       1 config.go:309] "Starting node config controller"
	I1013 23:08:43.781295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:08:43.781325       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:08:43.881085       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:08:43.881274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:08:43.881297       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2] <==
	E1013 23:07:34.432637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 23:07:34.438784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 23:07:34.438953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 23:07:34.439045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:07:34.439403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 23:07:34.441413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:07:34.441527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 23:07:34.441614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:07:34.441697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:07:34.441781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:07:34.441877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 23:07:34.441989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 23:07:34.442058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:07:34.442127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 23:07:34.442229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:07:34.442543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 23:07:34.442655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:07:34.442706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1013 23:07:35.723717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:28.136819       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 23:08:28.136918       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 23:08:28.136941       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 23:08:28.136968       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:28.137022       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 23:08:28.137036       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c] <==
	I1013 23:08:43.030005       1 serving.go:386] Generated self-signed cert in-memory
	I1013 23:08:44.193229       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:08:44.193341       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:08:44.199173       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:08:44.199258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 23:08:44.199358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 23:08:44.199419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:08:44.201817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.201913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.201962       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:08:44.202012       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:08:44.300168       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 23:08:44.302590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.302588       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.883755    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.884073    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.884353    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: I1013 23:08:38.899688    1292 scope.go:117] "RemoveContainer" containerID="aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900226    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900666    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bpjsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ad8e0981-fc54-4ff2-bb74-451df2da5b37" pod="kube-system/kindnet-bpjsz"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900964    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcs2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="88c502fa-2e77-4baf-a3be-69a82b2da46d" pod="kube-system/kube-proxy-kcs2m"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901232    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901483    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901742    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: I1013 23:08:38.925045    1292 scope.go:117] "RemoveContainer" containerID="932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925604    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925766    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925911    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926067    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926209    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bpjsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ad8e0981-fc54-4ff2-bb74-451df2da5b37" pod="kube-system/kindnet-bpjsz"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926375    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcs2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="88c502fa-2e77-4baf-a3be-69a82b2da46d" pod="kube-system/kube-proxy-kcs2m"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926980    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q58xv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4f11b874-eb7d-44fd-9044-8d0db7aa854f" pod="kube-system/coredns-66bc5c9577-q58xv"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.381701    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.460099    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.587301    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:46 pause-836584 kubelet[1292]: W1013 23:08:46.823362    1292 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 13 23:08:58 pause-836584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:08:58 pause-836584 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:08:58 pause-836584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
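The kubelet journal above carries the failure signature for this test: every status_manager request to https://192.168.85.2:8443 is refused while the control plane restarts, after which systemd stops kubelet.service as part of the pause. The same reachability check can be reproduced by hand; a sketch, with the address taken from the logs above and the curl flags purely illustrative:

	curl -sk --max-time 2 https://192.168.85.2:8443/healthz || echo "apiserver not accepting connections"

This mirrors the healthz poll minikube itself performs ("Checking apiserver healthz at https://...:8443/healthz") later in these logs.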
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-836584 -n pause-836584
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-836584 -n pause-836584: exit status 2 (400.391613ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-836584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-836584
helpers_test.go:243: (dbg) docker inspect pause-836584:

-- stdout --
	[
	    {
	        "Id": "0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9",
	        "Created": "2025-10-13T23:07:10.814555889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:07:10.876003415Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/hosts",
	        "LogPath": "/var/lib/docker/containers/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9/0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9-json.log",
	        "Name": "/pause-836584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-836584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-836584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0c2a622a7ec4af3d1fc27eb21d90999694847ed76ae1021c449f872bfc90ffa9",
	                "LowerDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb4feadd2cb323f3709440ab60018a6853a4c29a060536f792a4a814f3c7078a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-836584",
	                "Source": "/var/lib/docker/volumes/pause-836584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-836584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-836584",
	                "name.minikube.sigs.k8s.io": "pause-836584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cd0182f669995fd9c8d514126c8de834d8f2f4400daa4f90e2ebe3e46891a4b",
	            "SandboxKey": "/var/run/docker/netns/5cd0182f6699",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-836584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:e8:c4:34:82:a4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "786fd759c03007698ab28769589888093299db9eb6fb29c4eea9eadee6b21ed9",
	                    "EndpointID": "7a0e9b546378a2e7b64ed42a91fdb70f1020f2c92620441a3b9d022e01574d48",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-836584",
	                        "0c2a622a7ec4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
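The inspect output above shows the container still running, with every guest port published on a dynamic loopback port (e.g. 8443/tcp -> 127.0.0.1:33422). Any of these mappings can be read back with the same Go template libmachine applies to 22/tcp later in these logs; a sketch, with the container name and port taken from the output above:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-836584
	# prints 33422, per the NetworkSettings block above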
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-836584 -n pause-836584
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-836584 -n pause-836584: exit status 2 (356.770106ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-836584 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-836584 logs -n 25: (1.545135676s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-762540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:03 UTC │
	│ start   │ -p missing-upgrade-354983 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-354983    │ jenkins │ v1.32.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:03 UTC │ 13 Oct 25 23:04 UTC │
	│ delete  │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p missing-upgrade-354983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-354983    │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ ssh     │ -p NoKubernetes-762540 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │                     │
	│ stop    │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p NoKubernetes-762540 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ ssh     │ -p NoKubernetes-762540 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │                     │
	│ delete  │ -p NoKubernetes-762540                                                                                                                   │ NoKubernetes-762540       │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:04 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:04 UTC │ 13 Oct 25 23:05 UTC │
	│ delete  │ -p missing-upgrade-354983                                                                                                                │ missing-upgrade-354983    │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p stopped-upgrade-633601 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-633601    │ jenkins │ v1.32.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ stop    │ -p kubernetes-upgrade-211312                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │                     │
	│ stop    │ stopped-upgrade-633601 stop                                                                                                              │ stopped-upgrade-633601    │ jenkins │ v1.32.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:05 UTC │
	│ start   │ -p stopped-upgrade-633601 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-633601    │ jenkins │ v1.37.0 │ 13 Oct 25 23:05 UTC │ 13 Oct 25 23:06 UTC │
	│ delete  │ -p stopped-upgrade-633601                                                                                                                │ stopped-upgrade-633601    │ jenkins │ v1.37.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:06 UTC │
	│ start   │ -p running-upgrade-276330 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-276330    │ jenkins │ v1.32.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:06 UTC │
	│ start   │ -p running-upgrade-276330 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-276330    │ jenkins │ v1.37.0 │ 13 Oct 25 23:06 UTC │ 13 Oct 25 23:07 UTC │
	│ delete  │ -p running-upgrade-276330                                                                                                                │ running-upgrade-276330    │ jenkins │ v1.37.0 │ 13 Oct 25 23:07 UTC │ 13 Oct 25 23:07 UTC │
	│ start   │ -p pause-836584 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:07 UTC │ 13 Oct 25 23:08 UTC │
	│ start   │ -p pause-836584 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:08 UTC │ 13 Oct 25 23:08 UTC │
	│ pause   │ -p pause-836584 --alsologtostderr -v=5                                                                                                   │ pause-836584              │ jenkins │ v1.37.0 │ 13 Oct 25 23:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:08:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:08:25.826112  589554 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:08:25.826686  589554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:25.826720  589554 out.go:374] Setting ErrFile to fd 2...
	I1013 23:08:25.826742  589554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:08:25.827043  589554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:08:25.827469  589554 out.go:368] Setting JSON to false
	I1013 23:08:25.828484  589554 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10242,"bootTime":1760386664,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:08:25.828582  589554 start.go:141] virtualization:  
	I1013 23:08:25.831765  589554 out.go:179] * [pause-836584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:08:25.835702  589554 notify.go:220] Checking for updates...
	I1013 23:08:25.836577  589554 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:08:25.840252  589554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:08:25.843249  589554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:08:25.846151  589554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:08:25.849091  589554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:08:25.853090  589554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:08:25.856593  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:25.857317  589554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:08:25.888817  589554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:08:25.888944  589554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:08:25.976172  589554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:08:25.965913228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:08:25.976276  589554 docker.go:318] overlay module found
	I1013 23:08:25.979744  589554 out.go:179] * Using the docker driver based on existing profile
	I1013 23:08:25.982676  589554 start.go:305] selected driver: docker
	I1013 23:08:25.982693  589554 start.go:925] validating driver "docker" against &{Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:25.982870  589554 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:08:25.982986  589554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:08:26.083296  589554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:08:26.072163796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:08:26.083708  589554 cni.go:84] Creating CNI manager for ""
	I1013 23:08:26.083767  589554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:08:26.083812  589554 start.go:349] cluster config:
	{Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:26.087011  589554 out.go:179] * Starting "pause-836584" primary control-plane node in "pause-836584" cluster
	I1013 23:08:26.089777  589554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:08:26.092758  589554 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:08:26.095572  589554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:08:26.095628  589554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:08:26.095640  589554 cache.go:58] Caching tarball of preloaded images
	I1013 23:08:26.095728  589554 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:08:26.095738  589554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:08:26.095880  589554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/config.json ...
	I1013 23:08:26.096182  589554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:08:26.125358  589554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:08:26.125378  589554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:08:26.125398  589554 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:08:26.125420  589554 start.go:360] acquireMachinesLock for pause-836584: {Name:mka7814e49a7b0446c04d5da0315da29b4254871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:08:26.125473  589554 start.go:364] duration metric: took 37.956µs to acquireMachinesLock for "pause-836584"
	I1013 23:08:26.125492  589554 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:08:26.125502  589554 fix.go:54] fixHost starting: 
	I1013 23:08:26.125775  589554 cli_runner.go:164] Run: docker container inspect pause-836584 --format={{.State.Status}}
	I1013 23:08:26.158350  589554 fix.go:112] recreateIfNeeded on pause-836584: state=Running err=<nil>
	W1013 23:08:26.158383  589554 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:08:22.465707  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:22.466144  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:22.466219  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:22.466302  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:22.492291  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:22.492310  576356 cri.go:89] found id: ""
	I1013 23:08:22.492318  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:22.492374  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.497453  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:22.497573  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:22.524860  576356 cri.go:89] found id: ""
	I1013 23:08:22.524940  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.524954  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:22.524962  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:22.525018  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:22.550687  576356 cri.go:89] found id: ""
	I1013 23:08:22.550715  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.550725  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:22.550732  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:22.550844  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:22.578955  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:22.579031  576356 cri.go:89] found id: ""
	I1013 23:08:22.579055  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:22.579160  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.583483  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:22.583554  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:22.610385  576356 cri.go:89] found id: ""
	I1013 23:08:22.610411  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.610420  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:22.610426  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:22.610541  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:22.637325  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:22.637345  576356 cri.go:89] found id: "e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:22.637351  576356 cri.go:89] found id: ""
	I1013 23:08:22.637386  576356 logs.go:282] 2 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53]
	I1013 23:08:22.637445  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.641125  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:22.644584  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:22.644721  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:22.671130  576356 cri.go:89] found id: ""
	I1013 23:08:22.671161  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.671170  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:22.671177  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:22.671233  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:22.698431  576356 cri.go:89] found id: ""
	I1013 23:08:22.698457  576356 logs.go:282] 0 containers: []
	W1013 23:08:22.698467  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:22.698480  576356 logs.go:123] Gathering logs for kube-controller-manager [e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53] ...
	I1013 23:08:22.698493  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:22.725777  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:22.725802  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:22.781676  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:22.781712  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:22.815315  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:22.815344  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:22.930391  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:22.930467  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:22.948545  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:22.948616  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:23.033366  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:23.033442  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:23.033471  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:23.089528  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:23.089566  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:23.128827  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:23.128860  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:25.681056  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:25.681447  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:25.681495  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:25.681555  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:25.724111  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:25.724131  576356 cri.go:89] found id: ""
	I1013 23:08:25.724139  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:25.724195  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.730940  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:25.731013  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:25.783070  576356 cri.go:89] found id: ""
	I1013 23:08:25.783182  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.783191  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:25.783203  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:25.783258  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:25.812938  576356 cri.go:89] found id: ""
	I1013 23:08:25.812961  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.812970  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:25.812976  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:25.813035  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:25.846941  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:25.846964  576356 cri.go:89] found id: ""
	I1013 23:08:25.846972  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:25.847026  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.851218  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:25.851286  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:25.890851  576356 cri.go:89] found id: ""
	I1013 23:08:25.890871  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.890879  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:25.890885  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:25.890947  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:25.935248  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:25.935265  576356 cri.go:89] found id: "e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:25.935270  576356 cri.go:89] found id: ""
	I1013 23:08:25.935277  576356 logs.go:282] 2 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53]
	I1013 23:08:25.935336  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.939751  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:25.951300  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:25.951583  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:25.983229  576356 cri.go:89] found id: ""
	I1013 23:08:25.983247  576356 logs.go:282] 0 containers: []
	W1013 23:08:25.983255  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:25.983261  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:25.983308  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:26.024623  576356 cri.go:89] found id: ""
	I1013 23:08:26.024646  576356 logs.go:282] 0 containers: []
	W1013 23:08:26.024655  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:26.024668  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:26.024681  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:26.095940  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:26.095966  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:26.137198  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:26.137221  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:26.292658  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:26.292732  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:26.310211  576356 logs.go:123] Gathering logs for kube-controller-manager [e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53] ...
	I1013 23:08:26.310289  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1e1724b2c745762c1396d36e04a1f6fb1402b3209223c3ae6d6492438951d53"
	I1013 23:08:26.338220  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:26.338246  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:26.425765  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:26.425786  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:26.425798  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:26.465120  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:26.465152  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:26.527414  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:26.527488  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:26.161994  589554 out.go:252] * Updating the running docker "pause-836584" container ...
	I1013 23:08:26.162031  589554 machine.go:93] provisionDockerMachine start ...
	I1013 23:08:26.162117  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.184875  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.185217  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.185227  589554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:08:26.339461  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-836584
	
	I1013 23:08:26.339536  589554 ubuntu.go:182] provisioning hostname "pause-836584"
	I1013 23:08:26.339629  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.362136  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.362436  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.362457  589554 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-836584 && echo "pause-836584" | sudo tee /etc/hostname
	I1013 23:08:26.534714  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-836584
	
	I1013 23:08:26.534835  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:26.556669  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:26.556979  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:26.557003  589554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-836584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-836584/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-836584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:08:26.703540  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:08:26.703632  589554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:08:26.703683  589554 ubuntu.go:190] setting up certificates
	I1013 23:08:26.703712  589554 provision.go:84] configureAuth start
	I1013 23:08:26.703816  589554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-836584
	I1013 23:08:26.720783  589554 provision.go:143] copyHostCerts
	I1013 23:08:26.720861  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:08:26.720880  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:08:26.720966  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:08:26.721075  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:08:26.721081  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:08:26.721107  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:08:26.721161  589554 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:08:26.721166  589554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:08:26.721188  589554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:08:26.721239  589554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.pause-836584 san=[127.0.0.1 192.168.85.2 localhost minikube pause-836584]
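Annotation: provision.go:117 reissues the machine's server certificate against the local minikube CA with the subject alternative names listed in san=[...]. A compressed crypto/x509 sketch of that shape; for self-containment it generates a throwaway CA in process, where the real flow loads ca.pem and ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-836584"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "pause-836584"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}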
	I1013 23:08:27.760579  589554 provision.go:177] copyRemoteCerts
	I1013 23:08:27.760650  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:08:27.760699  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:27.780470  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:27.883120  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:08:27.901658  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 23:08:27.919286  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:08:27.937395  589554 provision.go:87] duration metric: took 1.233643877s to configureAuth
	I1013 23:08:27.937423  589554 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:08:27.937681  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:27.937799  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:27.955004  589554 main.go:141] libmachine: Using SSH client type: native
	I1013 23:08:27.955333  589554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1013 23:08:27.955357  589554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:08:29.059398  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:29.059845  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:29.059899  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:29.059958  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:29.084873  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:29.084893  576356 cri.go:89] found id: ""
	I1013 23:08:29.084901  576356 logs.go:282] 1 containers: [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:29.084978  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.088665  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:29.088740  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:29.113523  576356 cri.go:89] found id: ""
	I1013 23:08:29.113545  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.113560  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:29.113567  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:29.113621  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:29.138843  576356 cri.go:89] found id: ""
	I1013 23:08:29.138866  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.138874  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:29.138881  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:29.138936  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:29.193052  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:29.193076  576356 cri.go:89] found id: ""
	I1013 23:08:29.193085  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:29.193142  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.197709  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:29.197783  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:29.234166  576356 cri.go:89] found id: ""
	I1013 23:08:29.234206  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.234216  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:29.234225  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:29.234293  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:29.266991  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:29.267016  576356 cri.go:89] found id: ""
	I1013 23:08:29.267025  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:29.267099  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:29.272390  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:29.272462  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:29.313348  576356 cri.go:89] found id: ""
	I1013 23:08:29.313371  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.313380  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:29.313387  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:29.313443  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:29.349600  576356 cri.go:89] found id: ""
	I1013 23:08:29.349622  576356 logs.go:282] 0 containers: []
	W1013 23:08:29.349630  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:29.349662  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:29.349677  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:29.367155  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:29.367181  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1013 23:08:33.300311  589554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:08:33.300335  589554 machine.go:96] duration metric: took 7.138295607s to provisionDockerMachine
	I1013 23:08:33.300346  589554 start.go:293] postStartSetup for "pause-836584" (driver="docker")
	I1013 23:08:33.300357  589554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:08:33.300417  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:08:33.300465  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.320149  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.427764  589554 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:08:33.431317  589554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:08:33.431346  589554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:08:33.431357  589554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:08:33.431415  589554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:08:33.431494  589554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:08:33.431604  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:08:33.439744  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:08:33.458466  589554 start.go:296] duration metric: took 158.105017ms for postStartSetup
	I1013 23:08:33.458549  589554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:08:33.458594  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.475914  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.576873  589554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:08:33.582979  589554 fix.go:56] duration metric: took 7.457472962s for fixHost
	I1013 23:08:33.583006  589554 start.go:83] releasing machines lock for "pause-836584", held for 7.457524366s
	I1013 23:08:33.583090  589554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-836584
	I1013 23:08:33.600985  589554 ssh_runner.go:195] Run: cat /version.json
	I1013 23:08:33.601010  589554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:08:33.601046  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.601077  589554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-836584
	I1013 23:08:33.618727  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.625401  589554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/pause-836584/id_rsa Username:docker}
	I1013 23:08:33.810034  589554 ssh_runner.go:195] Run: systemctl --version
	I1013 23:08:33.816658  589554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:08:33.857250  589554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:08:33.861977  589554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:08:33.862048  589554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:08:33.870175  589554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:08:33.870200  589554 start.go:495] detecting cgroup driver to use...
	I1013 23:08:33.870233  589554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:08:33.870279  589554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:08:33.886123  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:08:33.898818  589554 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:08:33.898878  589554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:08:33.915577  589554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:08:33.929398  589554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:08:34.072249  589554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:08:34.223230  589554 docker.go:234] disabling docker service ...
	I1013 23:08:34.223297  589554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:08:34.238972  589554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:08:34.252522  589554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:08:34.382988  589554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:08:34.523829  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:08:34.537841  589554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:08:34.551996  589554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:08:34.552113  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.561488  589554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:08:34.561562  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.570632  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.580478  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.589537  589554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:08:34.597526  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.606937  589554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:08:34.615273  589554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
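Annotation: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and a default_sysctls entry that sets net.ipv4.ip_unprivileged_port_start=0 inside pods. A sketch of the first two substitutions as Go regexp rewrites over the file contents (illustrative only, with made-up prior values; the real edits run as sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting config; only the key names match the real file.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}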
	I1013 23:08:34.625251  589554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:08:34.632791  589554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:08:34.640316  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:34.779463  589554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:08:34.947966  589554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:08:34.948040  589554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:08:34.951960  589554 start.go:563] Will wait 60s for crictl version
	I1013 23:08:34.952028  589554 ssh_runner.go:195] Run: which crictl
	I1013 23:08:34.955809  589554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:08:34.982279  589554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:08:34.982365  589554 ssh_runner.go:195] Run: crio --version
	I1013 23:08:35.012290  589554 ssh_runner.go:195] Run: crio --version
	I1013 23:08:35.048748  589554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:08:35.051769  589554 cli_runner.go:164] Run: docker network inspect pause-836584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:08:35.069009  589554 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:08:35.073408  589554 kubeadm.go:883] updating cluster {Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:08:35.073554  589554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:08:35.073629  589554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:08:35.110900  589554 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:08:35.110925  589554 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:08:35.110986  589554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:08:35.136232  589554 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:08:35.136256  589554 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:08:35.136265  589554 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 23:08:35.136379  589554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-836584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:08:35.136469  589554 ssh_runner.go:195] Run: crio config
	I1013 23:08:35.205434  589554 cni.go:84] Creating CNI manager for ""
	I1013 23:08:35.205459  589554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:08:35.205482  589554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:08:35.205507  589554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-836584 NodeName:pause-836584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:08:35.205687  589554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-836584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
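Annotation: the rendered kubeadm.yaml above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; kubeadm reads them all from one --config path. A small sketch that splits such a file and reports each document's kind (string handling only, no YAML parser):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the rendered config above.
	rendered := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(rendered, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if kind, ok := strings.CutPrefix(line, "kind: "); ok {
				fmt.Printf("document %d: %s\n", i+1, kind)
			}
		}
	}
}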
	I1013 23:08:35.205767  589554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:08:35.214450  589554 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:08:35.214569  589554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:08:35.222361  589554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 23:08:35.235792  589554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:08:35.249712  589554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1013 23:08:35.262999  589554 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:08:35.266945  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:35.410521  589554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:08:35.424380  589554 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584 for IP: 192.168.85.2
	I1013 23:08:35.424404  589554 certs.go:195] generating shared ca certs ...
	I1013 23:08:35.424420  589554 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:35.424627  589554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:08:35.424697  589554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:08:35.424710  589554 certs.go:257] generating profile certs ...
	I1013 23:08:35.424816  589554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key
	I1013 23:08:35.424905  589554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.key.d1c58bc8
	I1013 23:08:35.424988  589554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.key
	I1013 23:08:35.425163  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:08:35.425216  589554 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:08:35.425234  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:08:35.425265  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:08:35.425307  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:08:35.425339  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:08:35.425401  589554 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:08:35.426025  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:08:35.445265  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:08:35.463303  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:08:35.481486  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:08:35.500283  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 23:08:35.518947  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:08:35.537149  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:08:35.555241  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:08:35.573600  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:08:35.592023  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:08:35.609874  589554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:08:35.627709  589554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:08:35.641284  589554 ssh_runner.go:195] Run: openssl version
	I1013 23:08:35.647997  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:08:35.656632  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.660448  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.660571  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:08:35.701913  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:08:35.709923  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:08:35.718236  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.722190  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.722312  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:08:35.763227  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:08:35.771131  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:08:35.779877  589554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.783656  589554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.783771  589554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:08:35.826262  589554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
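Annotation: each certificate round above follows OpenSSL's lookup convention: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS verification find the CA by hash. A sketch of one round, shelling out to openssl, using the minikubeCA path and its b5213941 hash seen earlier in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}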
	I1013 23:08:35.834692  589554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:08:35.838685  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:08:35.880240  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:08:35.921181  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:08:35.962600  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:08:36.014023  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:08:36.056107  589554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
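Annotation: the six openssl x509 -checkend 86400 runs confirm each control-plane certificate remains valid for at least another day before it is reused; a failing check would force regeneration. The same test in Go, assuming a PEM certificate path as the sole argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl's -checkend N asks: does the cert expire within the next N seconds?
	if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}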
	I1013 23:08:36.098291  589554 kubeadm.go:400] StartCluster: {Name:pause-836584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-836584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:08:36.098409  589554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:08:36.098509  589554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:08:36.129529  589554 cri.go:89] found id: "932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	I1013 23:08:36.129550  589554 cri.go:89] found id: "05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010"
	I1013 23:08:36.129555  589554 cri.go:89] found id: "aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	I1013 23:08:36.129559  589554 cri.go:89] found id: "87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2"
	I1013 23:08:36.129562  589554 cri.go:89] found id: "f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0"
	I1013 23:08:36.129565  589554 cri.go:89] found id: "7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334"
	I1013 23:08:36.129568  589554 cri.go:89] found id: "ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa"
	I1013 23:08:36.129571  589554 cri.go:89] found id: ""
	I1013 23:08:36.129622  589554 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:08:36.140681  589554 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:08:36Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:08:36.140769  589554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:08:36.148992  589554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:08:36.149087  589554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:08:36.149226  589554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:08:36.158317  589554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:08:36.159067  589554 kubeconfig.go:125] found "pause-836584" server: "https://192.168.85.2:8443"
	I1013 23:08:36.159921  589554 kapi.go:59] client config for pause-836584: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key", CAFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 23:08:36.160399  589554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 23:08:36.160418  589554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 23:08:36.160426  589554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 23:08:36.160431  589554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 23:08:36.160435  589554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 23:08:36.160809  589554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:08:36.169489  589554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:08:36.169566  589554 kubeadm.go:601] duration metric: took 20.457709ms to restartPrimaryControlPlane
	I1013 23:08:36.169586  589554 kubeadm.go:402] duration metric: took 71.317932ms to StartCluster
	I1013 23:08:36.169602  589554 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:36.169678  589554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:08:36.170552  589554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:08:36.170789  589554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:08:36.171192  589554 config.go:182] Loaded profile config "pause-836584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:08:36.171136  589554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:08:36.176216  589554 out.go:179] * Verifying Kubernetes components...
	I1013 23:08:36.176216  589554 out.go:179] * Enabled addons: 
	I1013 23:08:36.179102  589554 addons.go:514] duration metric: took 7.925836ms for enable addons: enabled=[]
	I1013 23:08:36.179140  589554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:08:36.315785  589554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:08:36.329056  589554 node_ready.go:35] waiting up to 6m0s for node "pause-836584" to be "Ready" ...
	W1013 23:08:38.329678  589554 node_ready.go:55] error getting node "pause-836584" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/pause-836584": dial tcp 192.168.85.2:8443: connect: connection refused
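Annotation: node_ready.go tolerates connection-refused while the just-restarted apiserver comes back (it succeeds about five seconds later, at 23:08:43). A client-go sketch of that kind of wait loop; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "pause-836584", metav1.GetOptions{})
			if err != nil {
				return false, nil // connection refused during restart: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}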
	I1013 23:08:39.459904  576356 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.092703029s)
	W1013 23:08:39.459938  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1013 23:08:39.459946  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:39.459957  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:39.512010  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:39.512083  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:39.600242  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:39.600290  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:39.647074  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:39.647113  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:39.731003  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:39.731047  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:39.825761  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:39.825800  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:43.720131  589554 node_ready.go:49] node "pause-836584" is "Ready"
	I1013 23:08:43.720157  589554 node_ready.go:38] duration metric: took 7.391068457s for node "pause-836584" to be "Ready" ...
	I1013 23:08:43.720172  589554 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:08:43.720236  589554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:08:43.736558  589554 api_server.go:72] duration metric: took 7.565731408s to wait for apiserver process to appear ...
	I1013 23:08:43.736582  589554 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:08:43.736602  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:43.745950  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:08:43.746034  589554 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 23:08:44.237728  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:44.246353  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:08:44.246390  589554 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
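Annotation: the 500s above are expected right after a restart: /healthz aggregates the apiserver's post-start hooks, and rbac/bootstrap-roles plus the priority-class bootstrap simply have not finished yet; half a second later the endpoint returns 200. A minimal Go prober for the same endpoint (TLS verification skipped, which is defensible only for a throwaway health probe against a local test cluster):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}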
	I1013 23:08:44.736730  589554 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:08:44.745929  589554 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:08:44.747021  589554 api_server.go:141] control plane version: v1.34.1
	I1013 23:08:44.747115  589554 api_server.go:131] duration metric: took 1.010525037s to wait for apiserver health ...
	I1013 23:08:44.747133  589554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:08:44.751045  589554 system_pods.go:59] 7 kube-system pods found
	I1013 23:08:44.751124  589554 system_pods.go:61] "coredns-66bc5c9577-q58xv" [4f11b874-eb7d-44fd-9044-8d0db7aa854f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:08:44.751141  589554 system_pods.go:61] "etcd-pause-836584" [1f2ebe8e-8752-4987-acab-293f657488da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:08:44.751148  589554 system_pods.go:61] "kindnet-bpjsz" [ad8e0981-fc54-4ff2-bb74-451df2da5b37] Running
	I1013 23:08:44.751158  589554 system_pods.go:61] "kube-apiserver-pause-836584" [e16e2454-7502-4521-b9b5-45a1bbc904cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:08:44.751171  589554 system_pods.go:61] "kube-controller-manager-pause-836584" [8a253953-fc36-453c-abc5-55336d81fe35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:08:44.751191  589554 system_pods.go:61] "kube-proxy-kcs2m" [88c502fa-2e77-4baf-a3be-69a82b2da46d] Running
	I1013 23:08:44.751202  589554 system_pods.go:61] "kube-scheduler-pause-836584" [540ed643-d210-4189-9035-72d23e456d08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:08:44.751219  589554 system_pods.go:74] duration metric: took 4.078993ms to wait for pod list to return data ...
	I1013 23:08:44.751228  589554 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:08:44.754184  589554 default_sa.go:45] found service account: "default"
	I1013 23:08:44.754212  589554 default_sa.go:55] duration metric: took 2.974213ms for default service account to be created ...
	I1013 23:08:44.754223  589554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:08:44.757773  589554 system_pods.go:86] 7 kube-system pods found
	I1013 23:08:44.757809  589554 system_pods.go:89] "coredns-66bc5c9577-q58xv" [4f11b874-eb7d-44fd-9044-8d0db7aa854f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:08:44.757820  589554 system_pods.go:89] "etcd-pause-836584" [1f2ebe8e-8752-4987-acab-293f657488da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:08:44.757847  589554 system_pods.go:89] "kindnet-bpjsz" [ad8e0981-fc54-4ff2-bb74-451df2da5b37] Running
	I1013 23:08:44.757859  589554 system_pods.go:89] "kube-apiserver-pause-836584" [e16e2454-7502-4521-b9b5-45a1bbc904cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:08:44.757867  589554 system_pods.go:89] "kube-controller-manager-pause-836584" [8a253953-fc36-453c-abc5-55336d81fe35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:08:44.757881  589554 system_pods.go:89] "kube-proxy-kcs2m" [88c502fa-2e77-4baf-a3be-69a82b2da46d] Running
	I1013 23:08:44.757888  589554 system_pods.go:89] "kube-scheduler-pause-836584" [540ed643-d210-4189-9035-72d23e456d08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:08:44.757896  589554 system_pods.go:126] duration metric: took 3.666686ms to wait for k8s-apps to be running ...
	I1013 23:08:44.757905  589554 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:08:44.757980  589554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:08:44.771815  589554 system_svc.go:56] duration metric: took 13.899106ms WaitForService to wait for kubelet
	I1013 23:08:44.771886  589554 kubeadm.go:586] duration metric: took 8.60106475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:08:44.771923  589554 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:08:44.775425  589554 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:08:44.775468  589554 node_conditions.go:123] node cpu capacity is 2
	I1013 23:08:44.775481  589554 node_conditions.go:105] duration metric: took 3.551661ms to run NodePressure ...
	I1013 23:08:44.775494  589554 start.go:241] waiting for startup goroutines ...
	I1013 23:08:44.775502  589554 start.go:246] waiting for cluster config update ...
	I1013 23:08:44.775511  589554 start.go:255] writing updated cluster config ...
	I1013 23:08:44.775856  589554 ssh_runner.go:195] Run: rm -f paused
	I1013 23:08:44.779984  589554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:08:44.780654  589554 kapi.go:59] client config for pause-836584: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/profiles/pause-836584/client.key", CAFile:"/home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 23:08:44.783957  589554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q58xv" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:42.511093  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1013 23:08:46.790702  589554 pod_ready.go:104] pod "coredns-66bc5c9577-q58xv" is not "Ready", error: <nil>
	W1013 23:08:49.290111  589554 pod_ready.go:104] pod "coredns-66bc5c9577-q58xv" is not "Ready", error: <nil>
	I1013 23:08:47.512031  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1013 23:08:47.512096  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:47.512164  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:47.540436  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:47.540457  576356 cri.go:89] found id: "a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:47.540462  576356 cri.go:89] found id: ""
	I1013 23:08:47.540469  576356 logs.go:282] 2 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878]
	I1013 23:08:47.540527  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.544392  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.548124  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:47.548194  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:47.574835  576356 cri.go:89] found id: ""
	I1013 23:08:47.574860  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.574868  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:47.574874  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:47.574930  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:47.602049  576356 cri.go:89] found id: ""
	I1013 23:08:47.602074  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.602084  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:47.602091  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:47.602150  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:47.632732  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:47.632752  576356 cri.go:89] found id: ""
	I1013 23:08:47.632761  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:47.632818  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.636923  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:47.637002  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:47.663488  576356 cri.go:89] found id: ""
	I1013 23:08:47.663514  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.663523  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:47.663530  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:47.663588  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:47.691780  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:47.691813  576356 cri.go:89] found id: ""
	I1013 23:08:47.691822  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:47.691883  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:47.695802  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:47.695879  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:47.724693  576356 cri.go:89] found id: ""
	I1013 23:08:47.724720  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.724728  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:47.724735  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:47.724794  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:47.762441  576356 cri.go:89] found id: ""
	I1013 23:08:47.762466  576356 logs.go:282] 0 containers: []
	W1013 23:08:47.762474  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:47.762489  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:47.762503  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:47.835568  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:47.835606  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:47.867733  576356 logs.go:123] Gathering logs for kube-apiserver [a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878] ...
	I1013 23:08:47.867761  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5138e8c1af7e93748bfca40758131c88ed8c3191079353422c12c54beb92878"
	I1013 23:08:47.904517  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:47.904553  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:47.968153  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:47.968190  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:48.005363  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:48.005397  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:48.120399  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:48.120435  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:48.136891  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:48.136932  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1013 23:08:51.030994  576356 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.894042568s)
	W1013 23:08:51.031046  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:37388->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:37388->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1013 23:08:51.031054  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:08:51.031109  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:51.290127  589554 pod_ready.go:94] pod "coredns-66bc5c9577-q58xv" is "Ready"
	I1013 23:08:51.290151  589554 pod_ready.go:86] duration metric: took 6.506165311s for pod "coredns-66bc5c9577-q58xv" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:51.294193  589554 pod_ready.go:83] waiting for pod "etcd-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:08:53.300993  589554 pod_ready.go:104] pod "etcd-pause-836584" is not "Ready", error: <nil>
	I1013 23:08:55.799175  589554 pod_ready.go:94] pod "etcd-pause-836584" is "Ready"
	I1013 23:08:55.799208  589554 pod_ready.go:86] duration metric: took 4.504990747s for pod "etcd-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.801370  589554 pod_ready.go:83] waiting for pod "kube-apiserver-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.806164  589554 pod_ready.go:94] pod "kube-apiserver-pause-836584" is "Ready"
	I1013 23:08:55.806193  589554 pod_ready.go:86] duration metric: took 4.79554ms for pod "kube-apiserver-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.808559  589554 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.813040  589554 pod_ready.go:94] pod "kube-controller-manager-pause-836584" is "Ready"
	I1013 23:08:55.813068  589554 pod_ready.go:86] duration metric: took 4.485844ms for pod "kube-controller-manager-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:55.815550  589554 pod_ready.go:83] waiting for pod "kube-proxy-kcs2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:53.568705  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:53.569209  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:53.569267  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:53.569324  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:53.596366  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:53.596385  576356 cri.go:89] found id: ""
	I1013 23:08:53.596395  576356 logs.go:282] 1 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684]
	I1013 23:08:53.596452  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.600146  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:53.600227  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:53.628792  576356 cri.go:89] found id: ""
	I1013 23:08:53.628818  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.628827  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:53.628834  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:53.628893  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:53.661296  576356 cri.go:89] found id: ""
	I1013 23:08:53.661319  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.661327  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:53.661334  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:53.661396  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:53.692678  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:53.692704  576356 cri.go:89] found id: ""
	I1013 23:08:53.692713  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:53.692767  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.696444  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:53.696546  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:53.723295  576356 cri.go:89] found id: ""
	I1013 23:08:53.723321  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.723338  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:53.723363  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:53.723445  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:53.758927  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:53.758989  576356 cri.go:89] found id: ""
	I1013 23:08:53.759011  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:53.759120  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:53.763365  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:53.763463  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:53.789306  576356 cri.go:89] found id: ""
	I1013 23:08:53.789330  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.789339  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:53.789345  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:53.789409  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:53.818961  576356 cri.go:89] found id: ""
	I1013 23:08:53.818986  576356 logs.go:282] 0 containers: []
	W1013 23:08:53.818995  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:53.819004  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:53.819017  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:53.877522  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:53.877559  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:53.903631  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:53.903660  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:53.964689  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:53.964725  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:53.995510  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:53.995536  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:54.127411  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:54.127449  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:54.143924  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:54.143958  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:54.218410  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:54.218478  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:08:54.218505  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:55.997875  589554 pod_ready.go:94] pod "kube-proxy-kcs2m" is "Ready"
	I1013 23:08:55.997904  589554 pod_ready.go:86] duration metric: took 182.327999ms for pod "kube-proxy-kcs2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:56.198146  589554 pod_ready.go:83] waiting for pod "kube-scheduler-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:57.797419  589554 pod_ready.go:94] pod "kube-scheduler-pause-836584" is "Ready"
	I1013 23:08:57.797445  589554 pod_ready.go:86] duration metric: took 1.59927148s for pod "kube-scheduler-pause-836584" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:08:57.797457  589554 pod_ready.go:40] duration metric: took 13.01744102s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:08:57.849089  589554 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:08:57.852493  589554 out.go:179] * Done! kubectl is now configured to use "pause-836584" cluster and "default" namespace by default
	I1013 23:08:56.753451  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:56.754030  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:56.754098  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:56.754174  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:56.784147  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:56.784225  576356 cri.go:89] found id: ""
	I1013 23:08:56.784241  576356 logs.go:282] 1 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684]
	I1013 23:08:56.784308  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:56.788260  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:08:56.788342  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:08:56.816777  576356 cri.go:89] found id: ""
	I1013 23:08:56.816798  576356 logs.go:282] 0 containers: []
	W1013 23:08:56.816806  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:08:56.816813  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:08:56.816873  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:08:56.856095  576356 cri.go:89] found id: ""
	I1013 23:08:56.856120  576356 logs.go:282] 0 containers: []
	W1013 23:08:56.856131  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:08:56.856138  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:08:56.856194  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:08:56.885991  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:56.886014  576356 cri.go:89] found id: ""
	I1013 23:08:56.886023  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:08:56.886079  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:56.890460  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:08:56.890532  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:08:56.919768  576356 cri.go:89] found id: ""
	I1013 23:08:56.919792  576356 logs.go:282] 0 containers: []
	W1013 23:08:56.919800  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:08:56.919806  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:08:56.919863  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:08:56.948597  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:56.948621  576356 cri.go:89] found id: ""
	I1013 23:08:56.948630  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:08:56.948697  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:08:56.952539  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:08:56.952607  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:08:56.979255  576356 cri.go:89] found id: ""
	I1013 23:08:56.979276  576356 logs.go:282] 0 containers: []
	W1013 23:08:56.979285  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:08:56.979291  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:08:56.979356  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:08:57.020872  576356 cri.go:89] found id: ""
	I1013 23:08:57.020904  576356 logs.go:282] 0 containers: []
	W1013 23:08:57.020914  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:08:57.020923  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:08:57.020935  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:08:57.039001  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:08:57.039031  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:08:57.110638  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:08:57.110662  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:08:57.110676  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:57.143061  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:08:57.143170  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:08:57.211546  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:08:57.211585  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:08:57.241787  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:08:57.241817  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:08:57.303533  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:08:57.303569  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:08:57.335105  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:08:57.335131  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 23:08:59.948838  576356 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:08:59.949252  576356 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1013 23:08:59.949297  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 23:08:59.949349  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 23:08:59.989012  576356 cri.go:89] found id: "15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:08:59.989030  576356 cri.go:89] found id: ""
	I1013 23:08:59.989039  576356 logs.go:282] 1 containers: [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684]
	I1013 23:08:59.989102  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:09:00.018239  576356 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 23:09:00.024337  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 23:09:00.213897  576356 cri.go:89] found id: ""
	I1013 23:09:00.214002  576356 logs.go:282] 0 containers: []
	W1013 23:09:00.214035  576356 logs.go:284] No container was found matching "etcd"
	I1013 23:09:00.214077  576356 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 23:09:00.214216  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 23:09:00.314807  576356 cri.go:89] found id: ""
	I1013 23:09:00.314831  576356 logs.go:282] 0 containers: []
	W1013 23:09:00.314840  576356 logs.go:284] No container was found matching "coredns"
	I1013 23:09:00.314847  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 23:09:00.314911  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 23:09:00.364608  576356 cri.go:89] found id: "f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:09:00.364632  576356 cri.go:89] found id: ""
	I1013 23:09:00.364642  576356 logs.go:282] 1 containers: [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87]
	I1013 23:09:00.365583  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:09:00.377361  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 23:09:00.377496  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 23:09:00.415118  576356 cri.go:89] found id: ""
	I1013 23:09:00.415144  576356 logs.go:282] 0 containers: []
	W1013 23:09:00.415152  576356 logs.go:284] No container was found matching "kube-proxy"
	I1013 23:09:00.415160  576356 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 23:09:00.415225  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 23:09:00.450713  576356 cri.go:89] found id: "e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:09:00.450738  576356 cri.go:89] found id: ""
	I1013 23:09:00.450747  576356 logs.go:282] 1 containers: [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17]
	I1013 23:09:00.450857  576356 ssh_runner.go:195] Run: which crictl
	I1013 23:09:00.456630  576356 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 23:09:00.456709  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 23:09:00.489588  576356 cri.go:89] found id: ""
	I1013 23:09:00.489619  576356 logs.go:282] 0 containers: []
	W1013 23:09:00.489628  576356 logs.go:284] No container was found matching "kindnet"
	I1013 23:09:00.489635  576356 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1013 23:09:00.489699  576356 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1013 23:09:00.530969  576356 cri.go:89] found id: ""
	I1013 23:09:00.531105  576356 logs.go:282] 0 containers: []
	W1013 23:09:00.531122  576356 logs.go:284] No container was found matching "storage-provisioner"
	I1013 23:09:00.531133  576356 logs.go:123] Gathering logs for dmesg ...
	I1013 23:09:00.531152  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 23:09:00.568750  576356 logs.go:123] Gathering logs for describe nodes ...
	I1013 23:09:00.568780  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 23:09:00.693944  576356 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 23:09:00.693968  576356 logs.go:123] Gathering logs for kube-apiserver [15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684] ...
	I1013 23:09:00.693980  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15a9084f7d5603a6ad1c8df00c2ded8eae68f6156af0022210635541c0537684"
	I1013 23:09:00.735737  576356 logs.go:123] Gathering logs for kube-scheduler [f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87] ...
	I1013 23:09:00.735779  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3945758e98dc9f960cdcd6b7be79cd801f38c1826e075b1aea02092ce7a6c87"
	I1013 23:09:00.805771  576356 logs.go:123] Gathering logs for kube-controller-manager [e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17] ...
	I1013 23:09:00.805807  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e564692ae2bf36d0e0f9ee91ec6d8bf0a7ddc6caf774fde42cc588aef22ccd17"
	I1013 23:09:00.836435  576356 logs.go:123] Gathering logs for CRI-O ...
	I1013 23:09:00.836468  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 23:09:00.920633  576356 logs.go:123] Gathering logs for container status ...
	I1013 23:09:00.920720  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 23:09:00.958277  576356 logs.go:123] Gathering logs for kubelet ...
	I1013 23:09:00.958305  576356 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
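	
	For reference, the repeated "Checking apiserver healthz" probes above query the apiserver's verbose health endpoint. A minimal manual equivalent, assuming anonymous access to /healthz is enabled (the Kubernetes default) and using the address from the log, is:
	
	  curl -sk "https://192.168.76.2:8443/healthz?verbose"
	
	A healthy apiserver answers 200 with the same [+]/[-] check list quoted earlier; a 500 response marks the individual poststarthooks (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) that are still failing.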
	
	
	==> CRI-O <==
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.003478073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.011203232Z" level=info msg="Created container 1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca: kube-system/kindnet-bpjsz/kindnet-cni" id=9e446384-1086-4101-8b59-7b19444680ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.013510227Z" level=info msg="Starting container: 1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca" id=6a280df7-2541-4add-8ce3-3d72ffd9e5a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.01594521Z" level=info msg="Started container" PID=2361 containerID=1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca description=kube-system/kindnet-bpjsz/kindnet-cni id=6a280df7-2541-4add-8ce3-3d72ffd9e5a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=df2b5156466d11e06aacfc2d7317ffc1388da1b351a19278b22701d925027df4
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.034185573Z" level=info msg="Created container 82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec: kube-system/coredns-66bc5c9577-q58xv/coredns" id=c7a4db3a-504f-4b76-8d10-a2fe826f7a5b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.036280182Z" level=info msg="Starting container: 82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec" id=cefe4b5f-810b-44f6-9681-16bf4cc53475 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.037874158Z" level=info msg="Started container" PID=2369 containerID=82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec description=kube-system/coredns-66bc5c9577-q58xv/coredns id=cefe4b5f-810b-44f6-9681-16bf4cc53475 name=/runtime.v1.RuntimeService/StartContainer sandboxID=47fad4b7d78b0959ee943b82956985e590d6ae16319ace36406fd33438e21e04
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.496450909Z" level=info msg="Created container f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62: kube-system/kube-proxy-kcs2m/kube-proxy" id=e2017455-8108-440b-88c4-f99083ea48ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.497879924Z" level=info msg="Starting container: f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62" id=f4cd814e-48f7-41cc-9541-5221dbd60175 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:08:39 pause-836584 crio[2042]: time="2025-10-13T23:08:39.501016702Z" level=info msg="Started container" PID=2356 containerID=f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62 description=kube-system/kube-proxy-kcs2m/kube-proxy id=f4cd814e-48f7-41cc-9541-5221dbd60175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d11765ef364d8c8e4cd2fb273cc62bca12ef79d529e97221b2f95bc599fcf2b
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.406179093Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409728465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409762909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.409788804Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.413099338Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.41313448Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.41315811Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416383387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416418594Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.416442946Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.42023146Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.420266068Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.42029111Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.423544521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:08:49 pause-836584 crio[2042]: time="2025-10-13T23:08:49.423578654Z" level=info msg="Updated default CNI network name to kindnet"
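	
	The CRI-O excerpt above can be narrowed to the same window on the node itself; a sketch, with the boundary timestamps taken from the entries shown:
	
	  sudo journalctl -u crio --since "2025-10-13 23:08:39" --until "2025-10-13 23:08:50" --no-pager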
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	82b32b0378d02       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   47fad4b7d78b0       coredns-66bc5c9577-q58xv               kube-system
	1b5152b89b473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   df2b5156466d1       kindnet-bpjsz                          kube-system
	f6343d4559199       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   6d11765ef364d       kube-proxy-kcs2m                       kube-system
	58af9ab254a42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   90bc0139660f4       etcd-pause-836584                      kube-system
	cebeaada2da3d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   16d57ec78898d       kube-scheduler-pause-836584            kube-system
	cc7bd33116bc4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   1a505af9bc53c       kube-controller-manager-pause-836584   kube-system
	847763d2657e4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   5e69143022fa9       kube-apiserver-pause-836584            kube-system
	932b463244a48       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   47fad4b7d78b0       coredns-66bc5c9577-q58xv               kube-system
	05d9739f49159       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6d11765ef364d       kube-proxy-kcs2m                       kube-system
	aa8add4e7d15a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   df2b5156466d1       kindnet-bpjsz                          kube-system
	87d15b6caafad       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   16d57ec78898d       kube-scheduler-pause-836584            kube-system
	f762c1241881f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   5e69143022fa9       kube-apiserver-pause-836584            kube-system
	7a4920c5bd303       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   90bc0139660f4       etcd-pause-836584                      kube-system
	ad46aca3d42a7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1a505af9bc53c       kube-controller-manager-pause-836584   kube-system
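	
	This table, including the Exited first-attempt containers, can be reproduced on the node with crictl (the -a flag includes non-running containers; --state filters to one state):
	
	  sudo crictl ps -a
	  sudo crictl ps -a --state exited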
	
	
	==> coredns [82b32b0378d0293a6afc8c915e0683617833af79028aaa6265914f6a5a9eb7ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36789 - 53866 "HINFO IN 3349287790880305469.4701604218403196471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031464924s
	
	
	==> coredns [932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44100 - 21424 "HINFO IN 5086195233547606467.5478536643543823063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020337465s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
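	
	Both coredns excerpts come from the same pod: 82b32b0378d02 is the running restart and 932b463244a48 the Exited first attempt (see the container status table above). Logs of an exited container remain retrievable by ID, e.g.:
	
	  sudo crictl logs --tail 100 932b463244a48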
	
	
	==> describe nodes <==
	Name:               pause-836584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-836584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=pause-836584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_07_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:07:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-836584
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:08:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:07:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:08:48 +0000   Mon, 13 Oct 2025 23:08:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-836584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                e5dc741b-862c-4ecb-bae2-b81fa7a53143
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q58xv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 etcd-pause-836584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-bpjsz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-836584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-836584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-kcs2m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-836584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 81s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Normal   NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-836584 status is now: NodeHasSufficientMemory
	  Normal   Starting                 95s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-836584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x8 over 95s)  kubelet          Node pause-836584 status is now: NodeHasSufficientPID
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s                kubelet          Node pause-836584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s                kubelet          Node pause-836584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s                kubelet          Node pause-836584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           84s                node-controller  Node pause-836584 event: Registered Node pause-836584 in Controller
	  Normal   NodeReady                42s                kubelet          Node pause-836584 status is now: NodeReady
	  Warning  ContainerGCFailed        28s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           18s                node-controller  Node pause-836584 event: Registered Node pause-836584 in Controller
	
	
	==> dmesg <==
	[ +36.310157] overlayfs: idmapped layers are currently not supported
	[Oct13 22:41] overlayfs: idmapped layers are currently not supported
	[Oct13 22:42] overlayfs: idmapped layers are currently not supported
	[  +4.001885] overlayfs: idmapped layers are currently not supported
	[Oct13 22:43] overlayfs: idmapped layers are currently not supported
	[Oct13 22:44] overlayfs: idmapped layers are currently not supported
	[Oct13 22:45] overlayfs: idmapped layers are currently not supported
	[Oct13 22:50] overlayfs: idmapped layers are currently not supported
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [58af9ab254a42fc1621d3c20990e94448e872636dcb69301be14e6dd6a30eeac] <==
	{"level":"warn","ts":"2025-10-13T23:08:41.854238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.872085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.894339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.927273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.951296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:41.981594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.012191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.066380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.095614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.172087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.187165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.252696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.256868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.283356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.300077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.318726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.338310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.355169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.372430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.389508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.420806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.455953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.466844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.483946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:08:42.598812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34572","server-name":"","error":"EOF"}
	
	
	==> etcd [7a4920c5bd3032d85edce22c6b1d7e7faf5b3388d83c5cdedc99275b911fb334] <==
	{"level":"warn","ts":"2025-10-13T23:07:32.525358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.540506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.562304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.587633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.617060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.628604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:07:32.722873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T23:08:28.128096Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T23:08:28.128146Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-836584","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-13T23:08:28.128248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T23:08:28.274687Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274846Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274886Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T23:08:28.274896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-13T23:08:28.274861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274952Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T23:08:28.274967Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T23:08:28.274974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T23:08:28.274996Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-13T23:08:28.275066Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-13T23:08:28.275064Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T23:08:28.278207Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-13T23:08:28.278283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T23:08:28.278314Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:08:28.278322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-836584","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 23:09:04 up  2:51,  0 user,  load average: 1.93, 2.27, 2.06
	Linux pause-836584 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b5152b89b4731aaa5e3f707ce37ba37467cf759163010a890fab0b638e646ca] <==
	I1013 23:08:39.127780       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:08:39.128160       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:08:39.128353       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:08:39.128394       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:08:39.128458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:08:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:08:39.406282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:08:39.406384       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:08:39.406476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:08:39.410494       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 23:08:43.809671       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:08:43.809708       1 metrics.go:72] Registering metrics
	I1013 23:08:43.809778       1 controller.go:711] "Syncing nftables rules"
	I1013 23:08:49.405833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:49.405892       1 main.go:301] handling current node
	I1013 23:08:59.406428       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:59.406495       1 main.go:301] handling current node
	
	
	==> kindnet [aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9] <==
	I1013 23:07:42.207895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:07:42.208439       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:07:42.208647       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:07:42.208696       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:07:42.208736       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:07:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:07:42.409223       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:07:42.409301       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:07:42.409336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:07:42.409478       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:08:12.410037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:08:12.410036       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:08:12.410259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:08:12.503791       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 23:08:13.910437       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:08:13.910479       1 metrics.go:72] Registering metrics
	I1013 23:08:13.910553       1 controller.go:711] "Syncing nftables rules"
	I1013 23:08:22.415202       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:08:22.415262       1 main.go:301] handling current node
	
	
	==> kube-apiserver [847763d2657e4ac8786f744228c853320d4ec0e12d75ac4e02a1aa292b61ebbd] <==
	I1013 23:08:43.628016       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:08:43.648193       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:08:43.663792       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:08:43.698151       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:08:43.698287       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:08:43.698322       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:08:43.698371       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:08:43.698586       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1013 23:08:43.699700       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:08:43.700602       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:08:43.703989       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:08:43.704598       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 23:08:43.704620       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 23:08:43.704722       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:08:43.704811       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 23:08:43.704849       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:08:43.713967       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 23:08:43.716280       1 policy_source.go:240] refreshing policies
	I1013 23:08:43.754304       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:08:44.319732       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:08:45.514938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:08:46.996991       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:08:47.155389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:08:47.198430       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:08:47.299554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f762c1241881f6b4b7eadb8f872ebdb0e3014eeb7279ff5eb1a0bda2e10750b0] <==
	W1013 23:08:28.149795       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.149902       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150001       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150108       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150234       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150569       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150711       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150829       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.150934       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151037       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151312       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151629       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151729       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151785       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151864       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151917       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.151971       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152021       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152074       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152130       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152178       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152223       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152271       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152323       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 23:08:28.152370       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [ad46aca3d42a7feeaea2b43fc17f4072c073fe9678b4c352198922c9e22c88aa] <==
	I1013 23:07:40.826524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:07:40.839494       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:07:40.827061       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 23:07:40.828944       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:07:40.826907       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:07:40.826968       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 23:07:40.827023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:07:40.853938       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:07:40.854126       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:07:40.854200       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:07:40.854274       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-836584"
	I1013 23:07:40.854311       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:07:40.854336       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:07:40.861675       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:07:40.863776       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:07:40.863907       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:07:40.864288       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:07:40.864632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:07:40.879004       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:07:40.879960       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:07:40.879174       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:07:40.887995       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:07:40.888361       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 23:07:40.906763       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 23:08:25.862387       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [cc7bd33116bc4acc38466c7d562ff96f74af865dd7aa4909cb16a23f999c0b25] <==
	I1013 23:08:46.946464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:08:46.948290       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 23:08:46.948351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 23:08:46.948373       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 23:08:46.948379       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 23:08:46.948385       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 23:08:46.948464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:08:46.948553       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:08:46.953363       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:08:46.955488       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:08:46.967964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:08:46.974164       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 23:08:46.976506       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:08:46.978722       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:08:46.984970       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:08:46.989527       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:08:46.989933       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:08:46.989999       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:08:46.990193       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:08:46.990678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:08:46.990840       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:08:46.990873       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:08:46.999853       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:08:46.999877       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:08:46.999886       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [05d9739f491591937305aba50241654d336905d5b240337bdc5473cfd033d010] <==
	I1013 23:07:42.181818       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:07:42.275228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:07:42.375759       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:07:42.375890       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:07:42.376001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:07:42.397075       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:07:42.397130       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:07:42.401626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:07:42.401958       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:07:42.402033       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:07:42.415276       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:07:42.415361       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:07:42.415412       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:07:42.415441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:07:42.417497       1 config.go:309] "Starting node config controller"
	I1013 23:07:42.420385       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:07:42.420402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:07:42.418093       1 config.go:200] "Starting service config controller"
	I1013 23:07:42.420412       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:07:42.516089       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:07:42.516089       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:07:42.521311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f6343d4559199a39788bf30d33800b5e41a931cd6e80620e52c90fa8180c2b62] <==
	I1013 23:08:39.777416       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:08:41.700276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:08:43.721851       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:08:43.721946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:08:43.722051       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:08:43.769795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:08:43.769859       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:08:43.778511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:08:43.778825       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:08:43.779003       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:08:43.780337       1 config.go:200] "Starting service config controller"
	I1013 23:08:43.780400       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:08:43.780453       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:08:43.780481       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:08:43.780517       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:08:43.780545       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:08:43.781241       1 config.go:309] "Starting node config controller"
	I1013 23:08:43.781295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:08:43.781325       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:08:43.881085       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:08:43.881274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:08:43.881297       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87d15b6caafad41c214e6fddcdcac2921d6badaad67a7da35289ff2b4d03b3b2] <==
	E1013 23:07:34.432637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 23:07:34.438784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 23:07:34.438953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 23:07:34.439045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:07:34.439403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 23:07:34.441413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:07:34.441527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 23:07:34.441614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:07:34.441697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:07:34.441781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:07:34.441877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 23:07:34.441989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 23:07:34.442058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:07:34.442127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 23:07:34.442229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:07:34.442543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 23:07:34.442655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:07:34.442706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1013 23:07:35.723717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:28.136819       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 23:08:28.136918       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 23:08:28.136941       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 23:08:28.136968       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:28.137022       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 23:08:28.137036       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cebeaada2da3d992b3ba1b12610ac388621e7bd8f90e348a4c320078cffa1b8c] <==
	I1013 23:08:43.030005       1 serving.go:386] Generated self-signed cert in-memory
	I1013 23:08:44.193229       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:08:44.193341       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:08:44.199173       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:08:44.199258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 23:08:44.199358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 23:08:44.199419       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:08:44.201817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.201913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.201962       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:08:44.202012       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:08:44.300168       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 23:08:44.302590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:08:44.302588       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.883755    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.884073    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.884353    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: I1013 23:08:38.899688    1292 scope.go:117] "RemoveContainer" containerID="aa8add4e7d15a8ac00cbc64d8c811002ca4df2c7d35010ed80b0716c6d03c5d9"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900226    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900666    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bpjsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ad8e0981-fc54-4ff2-bb74-451df2da5b37" pod="kube-system/kindnet-bpjsz"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.900964    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcs2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="88c502fa-2e77-4baf-a3be-69a82b2da46d" pod="kube-system/kube-proxy-kcs2m"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901232    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901483    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.901742    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: I1013 23:08:38.925045    1292 scope.go:117] "RemoveContainer" containerID="932b463244a48a0b94454f7f8b25fcdb1321327bc02108890c047545d029ad69"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925604    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925766    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.925911    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926067    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-836584\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6c55ceb518647e4a5902987f8b8c68dd" pod="kube-system/kube-controller-manager-pause-836584"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926209    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bpjsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ad8e0981-fc54-4ff2-bb74-451df2da5b37" pod="kube-system/kindnet-bpjsz"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926375    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kcs2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="88c502fa-2e77-4baf-a3be-69a82b2da46d" pod="kube-system/kube-proxy-kcs2m"
	Oct 13 23:08:38 pause-836584 kubelet[1292]: E1013 23:08:38.926980    1292 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q58xv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4f11b874-eb7d-44fd-9044-8d0db7aa854f" pod="kube-system/coredns-66bc5c9577-q58xv"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.381701    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="d1ab5559a60738a3557e810e33ac5fbd" pod="kube-system/kube-scheduler-pause-836584"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.460099    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="d1af56e764d91efdb60c316f3e92a2cb" pod="kube-system/etcd-pause-836584"
	Oct 13 23:08:43 pause-836584 kubelet[1292]: E1013 23:08:43.587301    1292 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-836584\" is forbidden: User \"system:node:pause-836584\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-836584' and this object" podUID="db44d70b208947530121dd46b1b98199" pod="kube-system/kube-apiserver-pause-836584"
	Oct 13 23:08:46 pause-836584 kubelet[1292]: W1013 23:08:46.823362    1292 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 13 23:08:58 pause-836584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:08:58 pause-836584 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:08:58 pause-836584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
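The journal excerpt above shows the kubelet still dialing the API server at 192.168.85.2:8443 long after the pause stopped it, until systemd finally stops kubelet.service. A quick way to confirm the API server container really is down (a sketch against this profile, assuming SSH access into the kic node):

	out/minikube-linux-arm64 -p pause-836584 ssh -- sudo crictl ps -a --name kube-apiserver   # shows the kube-apiserver container's current state
	out/minikube-linux-arm64 -p pause-836584 ssh -- curl -sk https://192.168.85.2:8443/healthz   # probes the endpoint the kubelet keeps failing against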
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-836584 -n pause-836584
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-836584 -n pause-836584: exit status 2 (381.611243ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
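Per minikube's own help text, `minikube status` encodes component health in its exit code as a bitmask, so "Running" on stdout alongside exit status 2 means the queried component answered while another (here the stopped kubelet) did not. To see every component at once (same profile, documented output flag):

	out/minikube-linux-arm64 status -p pause-836584 --output=json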
helpers_test.go:269: (dbg) Run:  kubectl --context pause-836584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.30s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.274972ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:12:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
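The root cause here is the paused-state probe rather than the addon itself: per the stderr above, minikube decides whether the cluster is paused by running `sudo runc list -f json` on the node, and that probe fails outright when /run/runc is absent. Two hedged checks against the same profile (names as in this test):

	out/minikube-linux-arm64 -p old-k8s-version-670275 ssh -- sudo runc list -f json   # reproduces the failing probe from the error text
	out/minikube-linux-arm64 -p old-k8s-version-670275 ssh -- ls -ld /run/runc         # confirms whether the runc state directory exists at all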
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-670275 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-670275 describe deploy/metrics-server -n kube-system: exit status 1 (94.95091ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-670275 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
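When the metrics-server deployment does exist, the image the addon rendered can be read directly instead of grepping `describe` output; a sketch with the same kubectl context (the jsonpath assumes a single container in the pod template):

	kubectl --context old-k8s-version-670275 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'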
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670275
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670275:

-- stdout --
	[
	    {
	        "Id": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	        "Created": "2025-10-13T23:11:32.172538967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 607274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:11:32.236430287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hosts",
	        "LogPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d-json.log",
	        "Name": "/old-k8s-version-670275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670275:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	                "LowerDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670275",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4da5c28588739e06135f0c639dc7cdb4091993f4794e699f453786610cdf2ab",
	            "SandboxKey": "/var/run/docker/netns/c4da5c285887",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:00:f3:43:29:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6af44596a4f08c38fd2f582b08ef6f7af936522e458a8a952d1d21c07e6e39f9",
	                    "EndpointID": "549b62631ada5e7f0f3e874877a8306b1d84cfafe8a1ea06269406d2f4d28702",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670275",
	                        "d5a910fa7ea2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
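Individual fields can be pulled out of that inspect dump with Go templates instead of reading the whole JSON; the first template below is the same one minikube itself runs later in this log to discover the forwarded SSH port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-670275
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-670275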
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25: (1.168457936s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-557095 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo containerd config dump                                                                                                                                                                                                  │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo crio config                                                                                                                                                                                                             │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ delete  │ -p cilium-557095                                                                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p kubernetes-upgrade-211312                                                                                                                                                                                                                  │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p force-systemd-env-255188                                                                                                                                                                                                                   │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ cert-options-051941 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:11:26
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:11:26.175263  606886 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:11:26.175435  606886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:11:26.175444  606886 out.go:374] Setting ErrFile to fd 2...
	I1013 23:11:26.175450  606886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:11:26.175714  606886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:11:26.176145  606886 out.go:368] Setting JSON to false
	I1013 23:11:26.177080  606886 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10423,"bootTime":1760386664,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:11:26.177150  606886 start.go:141] virtualization:  
	I1013 23:11:26.180768  606886 out.go:179] * [old-k8s-version-670275] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:11:26.184824  606886 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:11:26.184910  606886 notify.go:220] Checking for updates...
	I1013 23:11:26.190981  606886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:11:26.194042  606886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:11:26.197104  606886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:11:26.200210  606886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:11:26.203245  606886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:11:26.206806  606886 config.go:182] Loaded profile config "cert-expiration-896873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:11:26.206962  606886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:11:26.229185  606886 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:11:26.229330  606886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:11:26.297671  606886 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:11:26.288377403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:11:26.297783  606886 docker.go:318] overlay module found
	I1013 23:11:26.300915  606886 out.go:179] * Using the docker driver based on user configuration
	I1013 23:11:26.303826  606886 start.go:305] selected driver: docker
	I1013 23:11:26.303847  606886 start.go:925] validating driver "docker" against <nil>
	I1013 23:11:26.303861  606886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:11:26.304582  606886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:11:26.361736  606886 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:11:26.352331834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:11:26.361893  606886 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 23:11:26.362133  606886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:11:26.365116  606886 out.go:179] * Using Docker driver with root privileges
	I1013 23:11:26.367942  606886 cni.go:84] Creating CNI manager for ""
	I1013 23:11:26.368016  606886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:11:26.368025  606886 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 23:11:26.368115  606886 start.go:349] cluster config:
	{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:11:26.371160  606886 out.go:179] * Starting "old-k8s-version-670275" primary control-plane node in "old-k8s-version-670275" cluster
	I1013 23:11:26.374016  606886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:11:26.377062  606886 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:11:26.379913  606886 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:11:26.379937  606886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:11:26.379973  606886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 23:11:26.379983  606886 cache.go:58] Caching tarball of preloaded images
	I1013 23:11:26.380060  606886 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:11:26.380071  606886 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1013 23:11:26.380179  606886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:11:26.380200  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json: {Name:mk596a3c6affd0cca4fb37ad169e168dae1be0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:26.398099  606886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:11:26.398134  606886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:11:26.398161  606886 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:11:26.398189  606886 start.go:360] acquireMachinesLock for old-k8s-version-670275: {Name:mk06171e4a123ca0a835c4c644ea27e36804aedc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:11:26.398299  606886 start.go:364] duration metric: took 88.597µs to acquireMachinesLock for "old-k8s-version-670275"
	I1013 23:11:26.398333  606886 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:11:26.398414  606886 start.go:125] createHost starting for "" (driver="docker")
	I1013 23:11:26.401787  606886 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:11:26.402053  606886 start.go:159] libmachine.API.Create for "old-k8s-version-670275" (driver="docker")
	I1013 23:11:26.402105  606886 client.go:168] LocalClient.Create starting
	I1013 23:11:26.402182  606886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:11:26.402222  606886 main.go:141] libmachine: Decoding PEM data...
	I1013 23:11:26.402239  606886 main.go:141] libmachine: Parsing certificate...
	I1013 23:11:26.402294  606886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:11:26.402317  606886 main.go:141] libmachine: Decoding PEM data...
	I1013 23:11:26.402332  606886 main.go:141] libmachine: Parsing certificate...
	I1013 23:11:26.402701  606886 cli_runner.go:164] Run: docker network inspect old-k8s-version-670275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:11:26.419815  606886 cli_runner.go:211] docker network inspect old-k8s-version-670275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:11:26.419909  606886 network_create.go:284] running [docker network inspect old-k8s-version-670275] to gather additional debugging logs...
	I1013 23:11:26.419930  606886 cli_runner.go:164] Run: docker network inspect old-k8s-version-670275
	W1013 23:11:26.438791  606886 cli_runner.go:211] docker network inspect old-k8s-version-670275 returned with exit code 1
	I1013 23:11:26.438878  606886 network_create.go:287] error running [docker network inspect old-k8s-version-670275]: docker network inspect old-k8s-version-670275: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-670275 not found
	I1013 23:11:26.438910  606886 network_create.go:289] output of [docker network inspect old-k8s-version-670275]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-670275 not found
	
	** /stderr **
	I1013 23:11:26.439025  606886 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:11:26.456581  606886 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:11:26.456791  606886 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:11:26.457069  606886 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:11:26.457413  606886 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-024e25e32c1f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:95:c4:95:56:a6} reservation:<nil>}
	I1013 23:11:26.457861  606886 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a7e130}
	I1013 23:11:26.457885  606886 network_create.go:124] attempt to create docker network old-k8s-version-670275 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 23:11:26.457950  606886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-670275 old-k8s-version-670275
	I1013 23:11:26.520313  606886 network_create.go:108] docker network old-k8s-version-670275 192.168.85.0/24 created
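	(The scan above walks candidate /24 subnets, 192.168.49.0, .58, .67, .76, and settles on the first free one, 192.168.85.0/24. To list which subnets Docker bridges have already claimed, a quick check that is not part of the test run: docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}')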
	I1013 23:11:26.520347  606886 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-670275" container
	I1013 23:11:26.520420  606886 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:11:26.536745  606886 cli_runner.go:164] Run: docker volume create old-k8s-version-670275 --label name.minikube.sigs.k8s.io=old-k8s-version-670275 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:11:26.555132  606886 oci.go:103] Successfully created a docker volume old-k8s-version-670275
	I1013 23:11:26.555220  606886 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-670275-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-670275 --entrypoint /usr/bin/test -v old-k8s-version-670275:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:11:27.084156  606886 oci.go:107] Successfully prepared a docker volume old-k8s-version-670275
	I1013 23:11:27.084214  606886 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:11:27.084235  606886 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:11:27.084308  606886 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-670275:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 23:11:32.096194  606886 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-670275:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (5.011835849s)
	I1013 23:11:32.096240  606886 kic.go:203] duration metric: took 5.01200067s to extract preloaded images to volume ...
	W1013 23:11:32.096424  606886 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:11:32.096554  606886 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:11:32.157033  606886 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-670275 --name old-k8s-version-670275 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-670275 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-670275 --network old-k8s-version-670275 --ip 192.168.85.2 --volume old-k8s-version-670275:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:11:32.472887  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Running}}
	I1013 23:11:32.497949  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:11:32.525261  606886 cli_runner.go:164] Run: docker exec old-k8s-version-670275 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:11:32.578546  606886 oci.go:144] the created container "old-k8s-version-670275" has a running status.
	I1013 23:11:32.578584  606886 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa...
	I1013 23:11:33.032406  606886 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:11:33.053608  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:11:33.073225  606886 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:11:33.073245  606886 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-670275 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:11:33.117799  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:11:33.142717  606886 machine.go:93] provisionDockerMachine start ...
	I1013 23:11:33.142824  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:33.162647  606886 main.go:141] libmachine: Using SSH client type: native
	I1013 23:11:33.163026  606886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1013 23:11:33.163047  606886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:11:33.163807  606886 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60784->127.0.0.1:33444: read: connection reset by peer
	I1013 23:11:36.311018  606886 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:11:36.311065  606886 ubuntu.go:182] provisioning hostname "old-k8s-version-670275"
	I1013 23:11:36.311177  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:36.328307  606886 main.go:141] libmachine: Using SSH client type: native
	I1013 23:11:36.328621  606886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1013 23:11:36.328639  606886 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-670275 && echo "old-k8s-version-670275" | sudo tee /etc/hostname
	I1013 23:11:36.484276  606886 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:11:36.484389  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:36.502135  606886 main.go:141] libmachine: Using SSH client type: native
	I1013 23:11:36.502471  606886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1013 23:11:36.502495  606886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-670275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-670275/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-670275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:11:36.647351  606886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:11:36.647375  606886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:11:36.647394  606886 ubuntu.go:190] setting up certificates
	I1013 23:11:36.647404  606886 provision.go:84] configureAuth start
	I1013 23:11:36.647465  606886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:11:36.664888  606886 provision.go:143] copyHostCerts
	I1013 23:11:36.664956  606886 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:11:36.664965  606886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:11:36.665048  606886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:11:36.665133  606886 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:11:36.665138  606886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:11:36.665163  606886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:11:36.665219  606886 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:11:36.665223  606886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:11:36.665247  606886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:11:36.665292  606886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-670275 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-670275]
	I1013 23:11:37.536823  606886 provision.go:177] copyRemoteCerts
	I1013 23:11:37.536898  606886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:11:37.536944  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:37.555374  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:11:37.658711  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:11:37.677756  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:11:37.697015  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1013 23:11:37.714702  606886 provision.go:87] duration metric: took 1.067284348s to configureAuth
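configureAuth signed the server cert for the SAN set logged above (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-670275); one way to double-check it, sketched with openssl:

    # Print the Subject Alternative Names baked into the generated server cert;
    # they should match the san=[...] list in the provision.go line above.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'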
	I1013 23:11:37.714771  606886 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:11:37.714989  606886 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:11:37.715211  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:37.733729  606886 main.go:141] libmachine: Using SSH client type: native
	I1013 23:11:37.734037  606886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1013 23:11:37.734051  606886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:11:37.994087  606886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:11:37.994112  606886 machine.go:96] duration metric: took 4.851371316s to provisionDockerMachine
	I1013 23:11:37.994123  606886 client.go:171] duration metric: took 11.592005775s to LocalClient.Create
	I1013 23:11:37.994137  606886 start.go:167] duration metric: took 11.592084469s to libmachine.API.Create "old-k8s-version-670275"
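The CRIO_MINIKUBE_OPTIONS write above is the last provisioning step before the machine is handed back; when a run needs debugging, the file is trivial to confirm over the same SSH session (a sketch; the expected content is taken from the tee command above, not re-dumped by this run):

    cat /etc/sysconfig/crio.minikube
    # expected single option line:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '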
	I1013 23:11:37.994144  606886 start.go:293] postStartSetup for "old-k8s-version-670275" (driver="docker")
	I1013 23:11:37.994159  606886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:11:37.994233  606886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:11:37.994290  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:38.022701  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:11:38.127453  606886 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:11:38.131130  606886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:11:38.131161  606886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:11:38.131173  606886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:11:38.131272  606886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:11:38.131408  606886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:11:38.131530  606886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:11:38.139229  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:11:38.156731  606886 start.go:296] duration metric: took 162.560216ms for postStartSetup
	I1013 23:11:38.157169  606886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:11:38.179778  606886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:11:38.180087  606886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:11:38.180138  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:38.196998  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:11:38.295925  606886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:11:38.300328  606886 start.go:128] duration metric: took 11.901899192s to createHost
	I1013 23:11:38.300354  606886 start.go:83] releasing machines lock for "old-k8s-version-670275", held for 11.902038635s
	I1013 23:11:38.300424  606886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:11:38.322364  606886 ssh_runner.go:195] Run: cat /version.json
	I1013 23:11:38.322397  606886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:11:38.322427  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:38.322464  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:11:38.345357  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:11:38.363993  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:11:38.540579  606886 ssh_runner.go:195] Run: systemctl --version
	I1013 23:11:38.547045  606886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:11:38.585838  606886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:11:38.590382  606886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:11:38.590456  606886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:11:38.621462  606886 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
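The find/-exec above sidelines any bridge or podman CNI configs by tacking .mk_disabled onto the file name (the disabled files are listed on the line above); undoing it is just the reverse rename, sketched here:

    # Restore CNI configs that minikube renamed out of the way.
    for f in /etc/cni/net.d/*.mk_disabled; do
        sudo mv "$f" "${f%.mk_disabled}"
    done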
	I1013 23:11:38.621532  606886 start.go:495] detecting cgroup driver to use...
	I1013 23:11:38.621580  606886 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:11:38.621667  606886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:11:38.639602  606886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:11:38.652195  606886 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:11:38.652258  606886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:11:38.669873  606886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:11:38.690868  606886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:11:38.805797  606886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:11:38.962947  606886 docker.go:234] disabling docker service ...
	I1013 23:11:38.963016  606886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:11:38.986418  606886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:11:39.002717  606886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:11:39.130748  606886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:11:39.247134  606886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:11:39.261623  606886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:11:39.276684  606886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1013 23:11:39.276753  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.286377  606886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:11:39.286453  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.295652  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.305812  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.314986  606886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:11:39.324137  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.333362  606886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.348406  606886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:11:39.357442  606886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:11:39.364961  606886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:11:39.372356  606886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:11:39.488222  606886 ssh_runner.go:195] Run: sudo systemctl restart crio
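After the crio restart, the sed edits above should have left four effective settings in /etc/crio/crio.conf.d/02-crio.conf. A quick check, sketched; the expected values are inferred from the sed commands, not dumped by this run:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",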
	I1013 23:11:39.620314  606886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:11:39.620448  606886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:11:39.624624  606886 start.go:563] Will wait 60s for crictl version
	I1013 23:11:39.624724  606886 ssh_runner.go:195] Run: which crictl
	I1013 23:11:39.628773  606886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:11:39.653067  606886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:11:39.653170  606886 ssh_runner.go:195] Run: crio --version
	I1013 23:11:39.687697  606886 ssh_runner.go:195] Run: crio --version
	I1013 23:11:39.724793  606886 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1013 23:11:39.727710  606886 cli_runner.go:164] Run: docker network inspect old-k8s-version-670275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:11:39.744845  606886 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:11:39.748838  606886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:11:39.758995  606886 kubeadm.go:883] updating cluster {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:11:39.759152  606886 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:11:39.759213  606886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:11:39.790808  606886 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:11:39.790835  606886 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:11:39.790894  606886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:11:39.818377  606886 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:11:39.818399  606886 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:11:39.818407  606886 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1013 23:11:39.818501  606886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-670275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
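The [Unit]/[Service]/[Install] fragment above is rendered into the systemd drop-in that the 372-byte scp a few lines below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; on the node, the merged unit can be inspected with systemctl (a sketch):

    # Show kubelet.service together with minikube's 10-kubeadm.conf drop-in.
    sudo systemctl cat kubelet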
	I1013 23:11:39.818593  606886 ssh_runner.go:195] Run: crio config
	I1013 23:11:39.898914  606886 cni.go:84] Creating CNI manager for ""
	I1013 23:11:39.898938  606886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:11:39.898961  606886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:11:39.898985  606886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-670275 NodeName:old-k8s-version-670275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:11:39.899158  606886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-670275"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:11:39.899235  606886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1013 23:11:39.908774  606886 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:11:39.908846  606886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:11:39.918023  606886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1013 23:11:39.931102  606886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:11:39.946378  606886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
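That 2160-byte kubeadm.yaml.new is the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of checking a config like this by hand, assuming the kubeadm binary minikube staged for v1.28.0 supports the validate subcommand:

    # Validate the staged kubeadm config before it is copied into place.
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new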
	I1013 23:11:39.964238  606886 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:11:39.968087  606886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:11:39.978285  606886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:11:40.117346  606886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:11:40.134049  606886 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275 for IP: 192.168.85.2
	I1013 23:11:40.134073  606886 certs.go:195] generating shared ca certs ...
	I1013 23:11:40.134092  606886 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:40.134294  606886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:11:40.134355  606886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:11:40.134370  606886 certs.go:257] generating profile certs ...
	I1013 23:11:40.134439  606886 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.key
	I1013 23:11:40.134457  606886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt with IP's: []
	I1013 23:11:41.398101  606886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt ...
	I1013 23:11:41.398137  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: {Name:mkbcb85e2f9cfa7c449c06784fc1b9e07942fea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:41.398346  606886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.key ...
	I1013 23:11:41.398367  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.key: {Name:mk3594a9497fb269ec501afc6c5bdf744066fcc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:41.398463  606886 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a
	I1013 23:11:41.398483  606886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt.d7f6a84a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 23:11:41.715423  606886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt.d7f6a84a ...
	I1013 23:11:41.715454  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt.d7f6a84a: {Name:mkf4abcb8619dc6355561f7272c1d510cf360621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:41.715635  606886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a ...
	I1013 23:11:41.715652  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a: {Name:mk224a09a986caab47d37d8adbba85a504ec2ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:41.715735  606886 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt.d7f6a84a -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt
	I1013 23:11:41.715824  606886 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key
	I1013 23:11:41.715888  606886 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key
	I1013 23:11:41.715908  606886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt with IP's: []
	I1013 23:11:42.829414  606886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt ...
	I1013 23:11:42.829444  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt: {Name:mkbda264f36e9354a81349a42ff3bdc5ab8a5eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:42.829621  606886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key ...
	I1013 23:11:42.829643  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key: {Name:mkdeff63a67df0c36f0f9cbf847d2ae020472ca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:11:42.829841  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:11:42.829886  606886 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:11:42.829900  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:11:42.829931  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:11:42.829960  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:11:42.829985  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:11:42.830031  606886 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:11:42.830649  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:11:42.855980  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:11:42.875544  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:11:42.894018  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:11:42.914296  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 23:11:42.933151  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:11:42.952316  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:11:42.970589  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:11:42.988728  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:11:43.009949  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:11:43.028417  606886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:11:43.046103  606886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:11:43.059787  606886 ssh_runner.go:195] Run: openssl version
	I1013 23:11:43.066093  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:11:43.074623  606886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:11:43.078620  606886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:11:43.078765  606886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:11:43.120150  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:11:43.128713  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:11:43.137034  606886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:11:43.140685  606886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:11:43.140749  606886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:11:43.181855  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:11:43.190489  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:11:43.198712  606886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:11:43.202653  606886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:11:43.202730  606886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:11:43.246995  606886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
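Each test -L / ln -fs pair above follows OpenSSL's c_rehash convention: the symlink name is the certificate's subject hash (printed by the interleaved openssl x509 -hash calls) plus a .0 suffix. The pattern, sketched for the minikubeCA cert whose hash this run resolved to b5213941:

    # Link a CA cert under its subject-hash name so OpenSSL can find it.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # b5213941 here
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"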
	I1013 23:11:43.255255  606886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:11:43.259373  606886 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:11:43.259426  606886 kubeadm.go:400] StartCluster: {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:11:43.259496  606886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:11:43.259562  606886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:11:43.287347  606886 cri.go:89] found id: ""
	I1013 23:11:43.287449  606886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:11:43.296680  606886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:11:43.304518  606886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:11:43.304591  606886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:11:43.314340  606886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:11:43.314367  606886 kubeadm.go:157] found existing configuration files:
	
	I1013 23:11:43.314421  606886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 23:11:43.322292  606886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:11:43.322404  606886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:11:43.329612  606886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 23:11:43.337416  606886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:11:43.337480  606886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:11:43.344930  606886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 23:11:43.352892  606886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:11:43.352969  606886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:11:43.360933  606886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 23:11:43.368885  606886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:11:43.368981  606886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 23:11:43.376223  606886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:11:43.426480  606886 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1013 23:11:43.426577  606886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:11:43.465287  606886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:11:43.465366  606886 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:11:43.465406  606886 kubeadm.go:318] OS: Linux
	I1013 23:11:43.465458  606886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:11:43.465512  606886 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:11:43.465565  606886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:11:43.465619  606886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:11:43.465672  606886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:11:43.465726  606886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:11:43.465777  606886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:11:43.465830  606886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:11:43.465882  606886 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:11:43.546084  606886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:11:43.546314  606886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:11:43.546472  606886 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1013 23:11:43.693132  606886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:11:43.697541  606886 out.go:252]   - Generating certificates and keys ...
	I1013 23:11:43.697664  606886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:11:43.698364  606886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:11:44.163844  606886 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:11:44.926197  606886 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:11:45.285138  606886 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:11:45.593262  606886 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:11:45.923372  606886 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:11:45.923708  606886 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-670275] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:11:46.821903  606886 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:11:46.822229  606886 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-670275] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:11:47.200567  606886 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:11:47.807880  606886 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:11:48.561695  606886 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:11:48.561906  606886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:11:49.518523  606886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:11:49.904914  606886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:11:50.290017  606886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:11:50.616694  606886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:11:50.617574  606886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:11:50.620383  606886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:11:50.624200  606886 out.go:252]   - Booting up control plane ...
	I1013 23:11:50.624329  606886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:11:50.624415  606886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:11:50.624500  606886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:11:50.639637  606886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:11:50.640731  606886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:11:50.640786  606886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:11:50.783369  606886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1013 23:11:57.286440  606886 kubeadm.go:318] [apiclient] All control plane components are healthy after 6.503462 seconds
	I1013 23:11:57.286582  606886 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:11:57.310373  606886 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:11:57.850828  606886 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:11:57.851344  606886 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-670275 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:11:58.371513  606886 kubeadm.go:318] [bootstrap-token] Using token: yh10fd.tesryzm37gj6xb64
	I1013 23:11:58.374474  606886 out.go:252]   - Configuring RBAC rules ...
	I1013 23:11:58.374607  606886 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:11:58.379845  606886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:11:58.389191  606886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:11:58.395687  606886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:11:58.399594  606886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:11:58.405854  606886 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:11:58.427598  606886 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:11:58.715905  606886 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:11:58.790592  606886 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:11:58.792636  606886 kubeadm.go:318] 
	I1013 23:11:58.792726  606886 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:11:58.792733  606886 kubeadm.go:318] 
	I1013 23:11:58.792826  606886 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:11:58.792832  606886 kubeadm.go:318] 
	I1013 23:11:58.792866  606886 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:11:58.793467  606886 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:11:58.793535  606886 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:11:58.793560  606886 kubeadm.go:318] 
	I1013 23:11:58.793627  606886 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:11:58.793632  606886 kubeadm.go:318] 
	I1013 23:11:58.793686  606886 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:11:58.793691  606886 kubeadm.go:318] 
	I1013 23:11:58.793754  606886 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:11:58.793850  606886 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:11:58.793940  606886 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:11:58.793945  606886 kubeadm.go:318] 
	I1013 23:11:58.794340  606886 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:11:58.794439  606886 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:11:58.794445  606886 kubeadm.go:318] 
	I1013 23:11:58.794843  606886 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yh10fd.tesryzm37gj6xb64 \
	I1013 23:11:58.794982  606886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:11:58.795339  606886 kubeadm.go:318] 	--control-plane 
	I1013 23:11:58.795351  606886 kubeadm.go:318] 
	I1013 23:11:58.795828  606886 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:11:58.795904  606886 kubeadm.go:318] 
	I1013 23:11:58.796229  606886 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yh10fd.tesryzm37gj6xb64 \
	I1013 23:11:58.803812  606886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:11:58.811721  606886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:11:58.811861  606886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
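The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key. It can be recomputed with the standard openssl pipeline from the Kubernetes docs, sketched here against the ca.crt this run copied to /var/lib/minikube/certs/ (the sed just strips the digest label):

    # Recompute the discovery-token-ca-cert-hash that kubeadm printed above.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'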
	I1013 23:11:58.811880  606886 cni.go:84] Creating CNI manager for ""
	I1013 23:11:58.811888  606886 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:11:58.816708  606886 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 23:11:58.819907  606886 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:11:58.841167  606886 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1013 23:11:58.841192  606886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:11:58.887122  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:11:59.898402  606886 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.011174882s)
	I1013 23:11:59.898439  606886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:11:59.898578  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:11:59.898660  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-670275 minikube.k8s.io/updated_at=2025_10_13T23_11_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=old-k8s-version-670275 minikube.k8s.io/primary=true
	I1013 23:12:00.241525  606886 ops.go:34] apiserver oom_adj: -16
	I1013 23:12:00.241690  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:00.741838  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:01.241713  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:01.741859  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:02.241972  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:02.742276  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:03.242579  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:03.741856  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:04.241767  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:04.741707  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:05.242140  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:05.742533  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:06.242717  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:06.741877  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:07.242540  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:07.742324  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:08.242020  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:08.742488  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:09.242563  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:09.741773  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:10.242033  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:10.741817  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:11.242147  606886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:12:11.344701  606886 kubeadm.go:1113] duration metric: took 11.446167192s to wait for elevateKubeSystemPrivileges
	I1013 23:12:11.344746  606886 kubeadm.go:402] duration metric: took 28.085325844s to StartCluster
	I1013 23:12:11.344765  606886 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:12:11.344843  606886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:12:11.345950  606886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:12:11.346178  606886 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:12:11.346291  606886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:12:11.346572  606886 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:12:11.346693  606886 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:12:11.346790  606886 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-670275"
	I1013 23:12:11.346809  606886 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-670275"
	I1013 23:12:11.346837  606886 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:12:11.347639  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:12:11.347984  606886 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-670275"
	I1013 23:12:11.348007  606886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-670275"
	I1013 23:12:11.348281  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:12:11.359036  606886 out.go:179] * Verifying Kubernetes components...
	I1013 23:12:11.367933  606886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:12:11.401950  606886 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:12:11.404564  606886 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-670275"
	I1013 23:12:11.404680  606886 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:12:11.405159  606886 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:12:11.406667  606886 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:12:11.406691  606886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:12:11.406749  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:12:11.443220  606886 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:12:11.443247  606886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:12:11.443312  606886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:12:11.451242  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:12:11.470942  606886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:12:11.711340  606886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:12:11.753982  606886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:12:11.785706  606886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:12:11.823818  606886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:12:12.321943  606886 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 23:12:12.323909  606886 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-670275" to be "Ready" ...
	I1013 23:12:12.686425  606886 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 23:12:12.689341  606886 addons.go:514] duration metric: took 1.342619298s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:12:12.828162  606886 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-670275" context rescaled to 1 replicas
	W1013 23:12:14.328099  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:16.828153  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:19.328057  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:21.328136  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:23.827628  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:26.327704  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:28.328433  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	W1013 23:12:30.828012  606886 node_ready.go:57] node "old-k8s-version-670275" has "Ready":"False" status (will retry)
	I1013 23:12:33.330073  606886 node_ready.go:49] node "old-k8s-version-670275" is "Ready"
	I1013 23:12:33.330099  606886 node_ready.go:38] duration metric: took 21.005891252s for node "old-k8s-version-670275" to be "Ready" ...
	I1013 23:12:33.330112  606886 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:12:33.330175  606886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:12:33.346472  606886 api_server.go:72] duration metric: took 22.000250085s to wait for apiserver process to appear ...
	I1013 23:12:33.346497  606886 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:12:33.346515  606886 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:12:33.360643  606886 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:12:33.361981  606886 api_server.go:141] control plane version: v1.28.0
	I1013 23:12:33.362003  606886 api_server.go:131] duration metric: took 15.499364ms to wait for apiserver health ...
	I1013 23:12:33.362012  606886 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:12:33.366164  606886 system_pods.go:59] 8 kube-system pods found
	I1013 23:12:33.366198  606886 system_pods.go:61] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:12:33.366207  606886 system_pods.go:61] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running
	I1013 23:12:33.366213  606886 system_pods.go:61] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:12:33.366217  606886 system_pods.go:61] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running
	I1013 23:12:33.366222  606886 system_pods.go:61] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running
	I1013 23:12:33.366227  606886 system_pods.go:61] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:12:33.366231  606886 system_pods.go:61] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running
	I1013 23:12:33.366237  606886 system_pods.go:61] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:12:33.366243  606886 system_pods.go:74] duration metric: took 4.225357ms to wait for pod list to return data ...
	I1013 23:12:33.366251  606886 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:12:33.368795  606886 default_sa.go:45] found service account: "default"
	I1013 23:12:33.368867  606886 default_sa.go:55] duration metric: took 2.599954ms for default service account to be created ...
	I1013 23:12:33.368892  606886 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:12:33.372943  606886 system_pods.go:86] 8 kube-system pods found
	I1013 23:12:33.373022  606886 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:12:33.373036  606886 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running
	I1013 23:12:33.373043  606886 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:12:33.373048  606886 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running
	I1013 23:12:33.373053  606886 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running
	I1013 23:12:33.373057  606886 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:12:33.373061  606886 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running
	I1013 23:12:33.373067  606886 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:12:33.373093  606886 retry.go:31] will retry after 306.398816ms: missing components: kube-dns
	I1013 23:12:33.684730  606886 system_pods.go:86] 8 kube-system pods found
	I1013 23:12:33.684766  606886 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:12:33.684775  606886 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running
	I1013 23:12:33.684781  606886 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:12:33.684807  606886 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running
	I1013 23:12:33.684822  606886 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running
	I1013 23:12:33.684827  606886 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:12:33.684832  606886 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running
	I1013 23:12:33.684847  606886 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:12:33.684864  606886 retry.go:31] will retry after 297.177387ms: missing components: kube-dns
	I1013 23:12:33.989016  606886 system_pods.go:86] 8 kube-system pods found
	I1013 23:12:33.989055  606886 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:12:33.989064  606886 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running
	I1013 23:12:33.989071  606886 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:12:33.989077  606886 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running
	I1013 23:12:33.989085  606886 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running
	I1013 23:12:33.989089  606886 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:12:33.989094  606886 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running
	I1013 23:12:33.989106  606886 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:12:33.989121  606886 retry.go:31] will retry after 351.328657ms: missing components: kube-dns
	I1013 23:12:34.345343  606886 system_pods.go:86] 8 kube-system pods found
	I1013 23:12:34.345383  606886 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Running
	I1013 23:12:34.345390  606886 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running
	I1013 23:12:34.345396  606886 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:12:34.345400  606886 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running
	I1013 23:12:34.345409  606886 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running
	I1013 23:12:34.345414  606886 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:12:34.345419  606886 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running
	I1013 23:12:34.345423  606886 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Running
	I1013 23:12:34.345473  606886 system_pods.go:126] duration metric: took 976.559608ms to wait for k8s-apps to be running ...
	I1013 23:12:34.345496  606886 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:12:34.345590  606886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:12:34.358581  606886 system_svc.go:56] duration metric: took 13.076194ms WaitForService to wait for kubelet
	I1013 23:12:34.358666  606886 kubeadm.go:586] duration metric: took 23.012448144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:12:34.358691  606886 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:12:34.361558  606886 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:12:34.361596  606886 node_conditions.go:123] node cpu capacity is 2
	I1013 23:12:34.361610  606886 node_conditions.go:105] duration metric: took 2.912982ms to run NodePressure ...
	I1013 23:12:34.361622  606886 start.go:241] waiting for startup goroutines ...
	I1013 23:12:34.361630  606886 start.go:246] waiting for cluster config update ...
	I1013 23:12:34.361647  606886 start.go:255] writing updated cluster config ...
	I1013 23:12:34.361949  606886 ssh_runner.go:195] Run: rm -f paused
	I1013 23:12:34.365830  606886 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:12:34.370352  606886 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.375749  606886 pod_ready.go:94] pod "coredns-5dd5756b68-9jcbh" is "Ready"
	I1013 23:12:34.375776  606886 pod_ready.go:86] duration metric: took 5.396508ms for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.380009  606886 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.385255  606886 pod_ready.go:94] pod "etcd-old-k8s-version-670275" is "Ready"
	I1013 23:12:34.385284  606886 pod_ready.go:86] duration metric: took 5.247555ms for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.389733  606886 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.398062  606886 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-670275" is "Ready"
	I1013 23:12:34.398091  606886 pod_ready.go:86] duration metric: took 8.329166ms for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.401112  606886 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.770808  606886 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-670275" is "Ready"
	I1013 23:12:34.770838  606886 pod_ready.go:86] duration metric: took 369.699223ms for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:34.970861  606886 pod_ready.go:83] waiting for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:35.370529  606886 pod_ready.go:94] pod "kube-proxy-2ph29" is "Ready"
	I1013 23:12:35.370560  606886 pod_ready.go:86] duration metric: took 399.670462ms for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:35.570738  606886 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:35.970694  606886 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-670275" is "Ready"
	I1013 23:12:35.970725  606886 pod_ready.go:86] duration metric: took 399.957988ms for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:12:35.970740  606886 pod_ready.go:40] duration metric: took 1.604877221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:12:36.045393  606886 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 23:12:36.049694  606886 out.go:203] 
	W1013 23:12:36.058085  606886 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 23:12:36.062927  606886 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 23:12:36.066069  606886 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-670275" cluster and "default" namespace by default
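The sed pipeline run at 23:12:11 splices a hosts block into the coredns ConfigMap just ahead of the forward plugin (and a log directive ahead of errors), which is what makes host.minikube.internal resolvable from pods — confirmed by the "host record injected into CoreDNS's ConfigMap" line at 23:12:12. Reconstructed from those sed expressions alone (the elided plugins are whatever the stock Corefile already contained; nothing below was captured from the cluster), the resulting stanza looks roughly like:

.:53 {
    log
    errors
    # ... remaining stock plugins (health, ready, kubernetes, cache, ...)
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}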
	
	
	==> CRI-O <==
	Oct 13 23:12:33 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:33.532753763Z" level=info msg="Created container fbc7211c7b6fad4e4ead8e72c8008917a375a5296898f91b71ce341a608079d7: kube-system/coredns-5dd5756b68-9jcbh/coredns" id=588bbaf3-7cda-49ce-9ab6-1720e83af43d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:12:33 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:33.533727568Z" level=info msg="Starting container: fbc7211c7b6fad4e4ead8e72c8008917a375a5296898f91b71ce341a608079d7" id=4c590f7c-b06b-43b9-b3e9-32c4ee254ebb name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:12:33 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:33.536634208Z" level=info msg="Started container" PID=1946 containerID=fbc7211c7b6fad4e4ead8e72c8008917a375a5296898f91b71ce341a608079d7 description=kube-system/coredns-5dd5756b68-9jcbh/coredns id=4c590f7c-b06b-43b9-b3e9-32c4ee254ebb name=/runtime.v1.RuntimeService/StartContainer sandboxID=717c8559aca4775ac628eec0df1020a69ceef20a282a509aa160dee7ba90e921
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.584243904Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9b778177-ac45-4b6c-8f39-060d8fcfd8ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.584315197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.589701703Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:65f4d790ad40dbae9b78f95d98965a6f52de3d4586da1e243120b8380ac624f6 UID:4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0 NetNS:/var/run/netns/3a7b93a7-e4ed-40b0-86e0-18b5ccbd2c88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000135f38}] Aliases:map[]}"
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.589852371Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.600741951Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:65f4d790ad40dbae9b78f95d98965a6f52de3d4586da1e243120b8380ac624f6 UID:4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0 NetNS:/var/run/netns/3a7b93a7-e4ed-40b0-86e0-18b5ccbd2c88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000135f38}] Aliases:map[]}"
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.601115203Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.60409709Z" level=info msg="Ran pod sandbox 65f4d790ad40dbae9b78f95d98965a6f52de3d4586da1e243120b8380ac624f6 with infra container: default/busybox/POD" id=9b778177-ac45-4b6c-8f39-060d8fcfd8ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.607398388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5877c26-65a1-41e2-97fc-78aee7bdf342 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.607525409Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b5877c26-65a1-41e2-97fc-78aee7bdf342 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.607570528Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b5877c26-65a1-41e2-97fc-78aee7bdf342 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.609065545Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9acf0380-5425-4bf4-947f-401e72701c9d name=/runtime.v1.ImageService/PullImage
	Oct 13 23:12:36 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:36.612560111Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.885241001Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9acf0380-5425-4bf4-947f-401e72701c9d name=/runtime.v1.ImageService/PullImage
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.886520047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ca64531c-f547-43cf-a3a0-6fd3286d3ade name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.892803601Z" level=info msg="Creating container: default/busybox/busybox" id=af6249ac-033b-4c69-a2ba-3019c3aaab0c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.89502991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.900127142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.900858286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.917470612Z" level=info msg="Created container 2277d56faff78fa7d1b38ee4acd7ce17fc8655d3fe73204693a21017afd61497: default/busybox/busybox" id=af6249ac-033b-4c69-a2ba-3019c3aaab0c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.918374126Z" level=info msg="Starting container: 2277d56faff78fa7d1b38ee4acd7ce17fc8655d3fe73204693a21017afd61497" id=8aee44d0-dedb-4a73-8bc4-72fcbed763e4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:12:38 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:38.922366519Z" level=info msg="Started container" PID=2001 containerID=2277d56faff78fa7d1b38ee4acd7ce17fc8655d3fe73204693a21017afd61497 description=default/busybox/busybox id=8aee44d0-dedb-4a73-8bc4-72fcbed763e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=65f4d790ad40dbae9b78f95d98965a6f52de3d4586da1e243120b8380ac624f6
	Oct 13 23:12:45 old-k8s-version-670275 crio[836]: time="2025-10-13T23:12:45.508099624Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2277d56faff78       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   65f4d790ad40d       busybox                                          default
	fbc7211c7b6fa       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   717c8559aca47       coredns-5dd5756b68-9jcbh                         kube-system
	3959c2b15bd02       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   5dbbed2f88083       storage-provisioner                              kube-system
	06ad02acf778c       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   29d2f8cca72ec       kindnet-c6xtc                                    kube-system
	96d2bdbde407f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      34 seconds ago      Running             kube-proxy                0                   d170d4016e135       kube-proxy-2ph29                                 kube-system
	11c18d4911b04       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      54 seconds ago      Running             kube-scheduler            0                   a4b3a846ba49a       kube-scheduler-old-k8s-version-670275            kube-system
	3454acb2345ea       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      54 seconds ago      Running             kube-controller-manager   0                   9db21b2321a9e       kube-controller-manager-old-k8s-version-670275   kube-system
	2a0635c94d1b4       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      54 seconds ago      Running             kube-apiserver            0                   9dc1e8d513d3e       kube-apiserver-old-k8s-version-670275            kube-system
	82c6f8d93d141       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      54 seconds ago      Running             etcd                      0                   20ced6113acf2       etcd-old-k8s-version-670275                      kube-system
	
	
	==> coredns [fbc7211c7b6fad4e4ead8e72c8008917a375a5296898f91b71ce341a608079d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49716 - 38639 "HINFO IN 7506607482216879016.8330665787430616277. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013272521s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-670275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:12:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:12:33 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:12:33 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:12:33 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:12:33 +0000   Mon, 13 Oct 2025 23:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-670275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3228e95e-3de7-463c-a3f6-be9dbc04be1a
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-9jcbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     36s
	  kube-system                 etcd-old-k8s-version-670275                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         48s
	  kube-system                 kindnet-c6xtc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-old-k8s-version-670275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-old-k8s-version-670275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-2ph29                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-old-k8s-version-670275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 34s   kube-proxy       
	  Normal  Starting                 49s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s   kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s   kubelet          Node old-k8s-version-670275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s   kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s   node-controller  Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-670275 status is now: NodeReady
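The Ready condition in the table above is what the node_ready.go wait earlier in this log polls — roughly every 2.5s, for up to 6m0s, until the status flips to "Ready":"True". A minimal client-go sketch of that loop, as an illustration rather than minikube's actual implementation (the node name and timing are copied from this log; the kubeconfig path is an assumption):

// poll the NodeReady condition until true or deadline, mirroring the retry
// cadence visible in the node_ready.go log lines above (not minikube's code)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// assumes ~/.kube/config points at the cluster under test
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-670275", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // log shows ~2.5s between retries
	}
	fmt.Println("timed out waiting for node Ready")
}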
	
	
	==> dmesg <==
	[Oct13 22:44] overlayfs: idmapped layers are currently not supported
	[Oct13 22:45] overlayfs: idmapped layers are currently not supported
	[Oct13 22:50] overlayfs: idmapped layers are currently not supported
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [82c6f8d93d14187d88f5b21f0e2a9d25ca3f8c3d16d6694ee8cea8d2ba42fabd] <==
	{"level":"info","ts":"2025-10-13T23:11:52.371748Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:11:52.371772Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:11:52.37178Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:11:52.372244Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:11:52.372259Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:11:52.37279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-13T23:11:52.372868Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T23:11:53.152657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-13T23:11:53.152764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-13T23:11:53.152804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-13T23:11:53.152852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-13T23:11:53.152888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T23:11:53.152932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-13T23:11:53.152965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T23:11:53.155322Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-670275 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T23:11:53.155539Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:11:53.155653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:11:53.15694Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T23:11:53.157071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:11:53.158093Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T23:11:53.160159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T23:11:53.160238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T23:11:53.160341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:11:53.160435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:11:53.160491Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:12:47 up  2:55,  0 user,  load average: 2.65, 3.09, 2.47
	Linux old-k8s-version-670275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [06ad02acf778ca45aeada20781110d293c0a04d1c554475752a1819b2f181f42] <==
	I1013 23:12:22.711687       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:12:22.711949       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:12:22.712074       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:12:22.712091       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:12:22.712105       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:12:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:12:23.006671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:12:23.007769       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:12:23.008066       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:12:23.009411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 23:12:23.303210       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:12:23.303268       1 metrics.go:72] Registering metrics
	I1013 23:12:23.303332       1 controller.go:711] "Syncing nftables rules"
	I1013 23:12:33.007883       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:12:33.007956       1 main.go:301] handling current node
	I1013 23:12:43.008876       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:12:43.008914       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2a0635c94d1b49879fa8de6489e5e13058f64d030651064da4537e1f0a3f29e0] <==
	I1013 23:11:55.756603       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1013 23:11:55.757314       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 23:11:55.757803       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1013 23:11:55.757869       1 aggregator.go:166] initial CRD sync complete...
	I1013 23:11:55.757899       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 23:11:55.757926       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:11:55.757952       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:11:55.761804       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 23:11:55.761854       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1013 23:11:55.958951       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:11:56.463204       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 23:11:56.468270       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 23:11:56.468294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:11:57.144253       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:11:57.196454       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:11:57.305328       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 23:11:57.320458       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 23:11:57.321549       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 23:11:57.329039       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:11:57.684637       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 23:11:58.700594       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 23:11:58.714169       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 23:11:58.742706       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1013 23:12:10.997858       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1013 23:12:11.023725       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3454acb2345ea5b9d936177a8b13319e89914d42334f5322b85c0b82aea86e82] <==
	I1013 23:12:11.201879       1 taint_manager.go:211] "Sending events to api server"
	I1013 23:12:11.202092       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1013 23:12:11.202425       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-670275"
	I1013 23:12:11.202537       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1013 23:12:11.202833       1 event.go:307] "Event occurred" object="old-k8s-version-670275" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller"
	I1013 23:12:11.553481       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:12:11.554276       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 23:12:11.556490       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:12:11.658384       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nh9mt"
	I1013 23:12:11.696373       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-9jcbh"
	I1013 23:12:11.743688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="705.684538ms"
	I1013 23:12:11.772216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.484956ms"
	I1013 23:12:11.772329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.455µs"
	I1013 23:12:11.789881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.181µs"
	I1013 23:12:12.385536       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1013 23:12:12.431395       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nh9mt"
	I1013 23:12:12.443943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.450932ms"
	I1013 23:12:12.463889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.898263ms"
	I1013 23:12:12.493101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.164671ms"
	I1013 23:12:12.493294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.419µs"
	I1013 23:12:33.155892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.316µs"
	I1013 23:12:33.182315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.446µs"
	I1013 23:12:34.152130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.246598ms"
	I1013 23:12:34.152499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.369µs"
	I1013 23:12:36.206378       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [96d2bdbde407f06964e5fdeda51003f01d0697a329942bb1946b82b28d62357e] <==
	I1013 23:12:12.947682       1 server_others.go:69] "Using iptables proxy"
	I1013 23:12:12.962307       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 23:12:12.987012       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:12:12.988786       1 server_others.go:152] "Using iptables Proxier"
	I1013 23:12:12.988819       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 23:12:12.988826       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 23:12:12.988860       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 23:12:12.989218       1 server.go:846] "Version info" version="v1.28.0"
	I1013 23:12:12.989289       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:12:12.989995       1 config.go:188] "Starting service config controller"
	I1013 23:12:12.990062       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 23:12:12.990118       1 config.go:97] "Starting endpoint slice config controller"
	I1013 23:12:12.990145       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 23:12:12.990788       1 config.go:315] "Starting node config controller"
	I1013 23:12:12.990858       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 23:12:13.091192       1 shared_informer.go:318] Caches are synced for service config
	I1013 23:12:13.091218       1 shared_informer.go:318] Caches are synced for node config
	I1013 23:12:13.091236       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11c18d4911b04794e662b5b93ebb5ecf18a68646ed865bc9f515adc4d6f8a08c] <==
	W1013 23:11:55.730229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1013 23:11:55.730245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1013 23:11:55.730304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1013 23:11:55.730318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1013 23:11:55.730361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1013 23:11:55.730386       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1013 23:11:55.730502       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1013 23:11:55.730517       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:11:56.575415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1013 23:11:56.575455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1013 23:11:56.617275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1013 23:11:56.617310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1013 23:11:56.741760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1013 23:11:56.741905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1013 23:11:56.771337       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1013 23:11:56.771392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1013 23:11:56.795264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1013 23:11:56.795371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1013 23:11:56.816009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1013 23:11:56.816120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1013 23:11:56.859056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1013 23:11:56.859205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1013 23:11:57.184271       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1013 23:11:57.184303       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1013 23:12:00.017539       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150514    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95e536a5-7221-4e6f-9c1f-64f77071018a-lib-modules\") pod \"kube-proxy-2ph29\" (UID: \"95e536a5-7221-4e6f-9c1f-64f77071018a\") " pod="kube-system/kube-proxy-2ph29"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150554    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21a63e23-ce36-4981-bad3-f1386b824908-xtables-lock\") pod \"kindnet-c6xtc\" (UID: \"21a63e23-ce36-4981-bad3-f1386b824908\") " pod="kube-system/kindnet-c6xtc"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150587    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21a63e23-ce36-4981-bad3-f1386b824908-lib-modules\") pod \"kindnet-c6xtc\" (UID: \"21a63e23-ce36-4981-bad3-f1386b824908\") " pod="kube-system/kindnet-c6xtc"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150634    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k528k\" (UniqueName: \"kubernetes.io/projected/21a63e23-ce36-4981-bad3-f1386b824908-kube-api-access-k528k\") pod \"kindnet-c6xtc\" (UID: \"21a63e23-ce36-4981-bad3-f1386b824908\") " pod="kube-system/kindnet-c6xtc"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150658    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95e536a5-7221-4e6f-9c1f-64f77071018a-xtables-lock\") pod \"kube-proxy-2ph29\" (UID: \"95e536a5-7221-4e6f-9c1f-64f77071018a\") " pod="kube-system/kube-proxy-2ph29"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150683    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8hrs\" (UniqueName: \"kubernetes.io/projected/95e536a5-7221-4e6f-9c1f-64f77071018a-kube-api-access-r8hrs\") pod \"kube-proxy-2ph29\" (UID: \"95e536a5-7221-4e6f-9c1f-64f77071018a\") " pod="kube-system/kube-proxy-2ph29"
	Oct 13 23:12:11 old-k8s-version-670275 kubelet[1380]: I1013 23:12:11.150722    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95e536a5-7221-4e6f-9c1f-64f77071018a-kube-proxy\") pod \"kube-proxy-2ph29\" (UID: \"95e536a5-7221-4e6f-9c1f-64f77071018a\") " pod="kube-system/kube-proxy-2ph29"
	Oct 13 23:12:12 old-k8s-version-670275 kubelet[1380]: E1013 23:12:12.252843    1380 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 13 23:12:12 old-k8s-version-670275 kubelet[1380]: E1013 23:12:12.252948    1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95e536a5-7221-4e6f-9c1f-64f77071018a-kube-proxy podName:95e536a5-7221-4e6f-9c1f-64f77071018a nodeName:}" failed. No retries permitted until 2025-10-13 23:12:12.752920337 +0000 UTC m=+14.093076495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/95e536a5-7221-4e6f-9c1f-64f77071018a-kube-proxy") pod "kube-proxy-2ph29" (UID: "95e536a5-7221-4e6f-9c1f-64f77071018a") : failed to sync configmap cache: timed out waiting for the condition
	Oct 13 23:12:12 old-k8s-version-670275 kubelet[1380]: W1013 23:12:12.294862    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-29d2f8cca72ec8400d2ee3dc83727adc360bd17bdf6253ac889cfadfc6031b04 WatchSource:0}: Error finding container 29d2f8cca72ec8400d2ee3dc83727adc360bd17bdf6253ac889cfadfc6031b04: Status 404 returned error can't find the container with id 29d2f8cca72ec8400d2ee3dc83727adc360bd17bdf6253ac889cfadfc6031b04
	Oct 13 23:12:12 old-k8s-version-670275 kubelet[1380]: W1013 23:12:12.858044    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-d170d4016e13535ee966b339aedc893056ec559685e10985775cc8ab64c34360 WatchSource:0}: Error finding container d170d4016e13535ee966b339aedc893056ec559685e10985775cc8ab64c34360: Status 404 returned error can't find the container with id d170d4016e13535ee966b339aedc893056ec559685e10985775cc8ab64c34360
	Oct 13 23:12:13 old-k8s-version-670275 kubelet[1380]: I1013 23:12:13.078438    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2ph29" podStartSLOduration=2.078395493 podCreationTimestamp="2025-10-13 23:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:12:13.078291545 +0000 UTC m=+14.418447727" watchObservedRunningTime="2025-10-13 23:12:13.078395493 +0000 UTC m=+14.418551651"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.117325    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.148251    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-c6xtc" podStartSLOduration=11.805521295 podCreationTimestamp="2025-10-13 23:12:11 +0000 UTC" firstStartedPulling="2025-10-13 23:12:12.309288859 +0000 UTC m=+13.649445017" lastFinishedPulling="2025-10-13 23:12:22.651977694 +0000 UTC m=+23.992133851" observedRunningTime="2025-10-13 23:12:23.102705209 +0000 UTC m=+24.442861375" watchObservedRunningTime="2025-10-13 23:12:33.148210129 +0000 UTC m=+34.488366295"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.148403    1380 topology_manager.go:215] "Topology Admit Handler" podUID="bf19903c-00c6-4ccc-b9fd-6b0a36356658" podNamespace="kube-system" podName="storage-provisioner"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.151483    1380 topology_manager.go:215] "Topology Admit Handler" podUID="d7fa11f6-6bdd-48d6-b326-81f138997784" podNamespace="kube-system" podName="coredns-5dd5756b68-9jcbh"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.220847    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7fa11f6-6bdd-48d6-b326-81f138997784-config-volume\") pod \"coredns-5dd5756b68-9jcbh\" (UID: \"d7fa11f6-6bdd-48d6-b326-81f138997784\") " pod="kube-system/coredns-5dd5756b68-9jcbh"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.220905    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bf19903c-00c6-4ccc-b9fd-6b0a36356658-tmp\") pod \"storage-provisioner\" (UID: \"bf19903c-00c6-4ccc-b9fd-6b0a36356658\") " pod="kube-system/storage-provisioner"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.220937    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zfk\" (UniqueName: \"kubernetes.io/projected/bf19903c-00c6-4ccc-b9fd-6b0a36356658-kube-api-access-26zfk\") pod \"storage-provisioner\" (UID: \"bf19903c-00c6-4ccc-b9fd-6b0a36356658\") " pod="kube-system/storage-provisioner"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: I1013 23:12:33.220964    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdglg\" (UniqueName: \"kubernetes.io/projected/d7fa11f6-6bdd-48d6-b326-81f138997784-kube-api-access-rdglg\") pod \"coredns-5dd5756b68-9jcbh\" (UID: \"d7fa11f6-6bdd-48d6-b326-81f138997784\") " pod="kube-system/coredns-5dd5756b68-9jcbh"
	Oct 13 23:12:33 old-k8s-version-670275 kubelet[1380]: W1013 23:12:33.461074    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-5dbbed2f88083dc05c233fdbd76fb19e361ac775afad7df458d322b1054c3a7b WatchSource:0}: Error finding container 5dbbed2f88083dc05c233fdbd76fb19e361ac775afad7df458d322b1054c3a7b: Status 404 returned error can't find the container with id 5dbbed2f88083dc05c233fdbd76fb19e361ac775afad7df458d322b1054c3a7b
	Oct 13 23:12:34 old-k8s-version-670275 kubelet[1380]: I1013 23:12:34.138370    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=22.138326789 podCreationTimestamp="2025-10-13 23:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:12:34.124270341 +0000 UTC m=+35.464426507" watchObservedRunningTime="2025-10-13 23:12:34.138326789 +0000 UTC m=+35.478482955"
	Oct 13 23:12:36 old-k8s-version-670275 kubelet[1380]: I1013 23:12:36.282404    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9jcbh" podStartSLOduration=25.282346437 podCreationTimestamp="2025-10-13 23:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:12:34.139632986 +0000 UTC m=+35.479789152" watchObservedRunningTime="2025-10-13 23:12:36.282346437 +0000 UTC m=+37.622502595"
	Oct 13 23:12:36 old-k8s-version-670275 kubelet[1380]: I1013 23:12:36.282747    1380 topology_manager.go:215] "Topology Admit Handler" podUID="4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0" podNamespace="default" podName="busybox"
	Oct 13 23:12:36 old-k8s-version-670275 kubelet[1380]: I1013 23:12:36.345463    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlsj\" (UniqueName: \"kubernetes.io/projected/4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0-kube-api-access-7qlsj\") pod \"busybox\" (UID: \"4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0\") " pod="default/busybox"
	
	
	==> storage-provisioner [3959c2b15bd0200d7f6525c9626e5a4422f2a051f5cce3922e67acfc11e5be36] <==
	I1013 23:12:33.525297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:12:33.560535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:12:33.560741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 23:12:33.577554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:12:33.578946       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90115e5c-bd97-4767-8033-5c05d9173e3c", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670275_ecbd39a0-098a-4c8f-9a73-edf1c7b393d8 became leader
	I1013 23:12:33.579424       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_ecbd39a0-098a-4c8f-9a73-edf1c7b393d8!
	I1013 23:12:33.679587       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_ecbd39a0-098a-4c8f-9a73-edf1c7b393d8!
	

                                                
                                                
-- /stdout --
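The storage-provisioner startup in the log above is a textbook client-go leader election: acquire a lock object in kube-system (here the Endpoints object k8s.io-minikube-hostpath), then start the provisioner controller once the lease is held. A minimal sketch of the same pattern, assuming the standard k8s.io/client-go packages and using the newer Lease lock rather than the Endpoints lock this older provisioner uses (the identity string and callback bodies are placeholders):

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name and namespace as the lease in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath",
			},
			Client: client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: "old-k8s-version-670275_example", // placeholder identity
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// "Starting provisioner controller ..." happens here.
				},
				OnStoppedLeading: func() {
					// Lost the lease; stop provisioning.
				},
			},
		})
	}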
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-670275 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-670275 --alsologtostderr -v=1: exit status 80 (1.949191125s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-670275 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 23:14:07.038648  612709 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:14:07.038815  612709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:14:07.038825  612709 out.go:374] Setting ErrFile to fd 2...
	I1013 23:14:07.038831  612709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:14:07.039194  612709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:14:07.039492  612709 out.go:368] Setting JSON to false
	I1013 23:14:07.039520  612709 mustload.go:65] Loading cluster: old-k8s-version-670275
	I1013 23:14:07.040645  612709 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:14:07.041170  612709 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:14:07.058746  612709 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:14:07.059072  612709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:14:07.116520  612709 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:14:07.107144202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:14:07.117236  612709 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-670275 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 23:14:07.122903  612709 out.go:179] * Pausing node old-k8s-version-670275 ... 
	I1013 23:14:07.125944  612709 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:14:07.126318  612709 ssh_runner.go:195] Run: systemctl --version
	I1013 23:14:07.126374  612709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:14:07.143416  612709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:14:07.245910  612709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:14:07.268867  612709 pause.go:52] kubelet running: true
	I1013 23:14:07.268947  612709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:14:07.506471  612709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:14:07.506575  612709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:14:07.576647  612709 cri.go:89] found id: "e6509a2c244fb78a077b629a507425ed00e44a5cf154bde09ba3a82adad1c173"
	I1013 23:14:07.576673  612709 cri.go:89] found id: "7a67a57eec433712b1a70f2b083b16db62ef0096a63d6df917fb42b8c3e00b88"
	I1013 23:14:07.576678  612709 cri.go:89] found id: "c7cc067a350042f099fb0283fd178fc3d2dfe4c66947450412f9a359cb5eb276"
	I1013 23:14:07.576682  612709 cri.go:89] found id: "5a7cd159eef62e90e54723908fed3e0842527fca41323ee49c8f86d31c4ae5cb"
	I1013 23:14:07.576685  612709 cri.go:89] found id: "3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7"
	I1013 23:14:07.576689  612709 cri.go:89] found id: "d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d"
	I1013 23:14:07.576692  612709 cri.go:89] found id: "fd8abca92b65e2224720afca413ecd65f3d828117b27b543bbf324c4b469d469"
	I1013 23:14:07.576696  612709 cri.go:89] found id: "f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4"
	I1013 23:14:07.576699  612709 cri.go:89] found id: "11f465f0a5f2ad80579ad95495a8d238edc5b96f89e351cf305a5ec396507d05"
	I1013 23:14:07.576705  612709 cri.go:89] found id: "2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1"
	I1013 23:14:07.576708  612709 cri.go:89] found id: "8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	I1013 23:14:07.576712  612709 cri.go:89] found id: ""
	I1013 23:14:07.576772  612709 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:14:07.599980  612709 retry.go:31] will retry after 308.066048ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:14:07Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:14:07.908452  612709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:14:07.922112  612709 pause.go:52] kubelet running: false
	I1013 23:14:07.922174  612709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:14:08.103754  612709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:14:08.103849  612709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:14:08.174768  612709 cri.go:89] found id: "e6509a2c244fb78a077b629a507425ed00e44a5cf154bde09ba3a82adad1c173"
	I1013 23:14:08.174791  612709 cri.go:89] found id: "7a67a57eec433712b1a70f2b083b16db62ef0096a63d6df917fb42b8c3e00b88"
	I1013 23:14:08.174797  612709 cri.go:89] found id: "c7cc067a350042f099fb0283fd178fc3d2dfe4c66947450412f9a359cb5eb276"
	I1013 23:14:08.174801  612709 cri.go:89] found id: "5a7cd159eef62e90e54723908fed3e0842527fca41323ee49c8f86d31c4ae5cb"
	I1013 23:14:08.174805  612709 cri.go:89] found id: "3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7"
	I1013 23:14:08.174809  612709 cri.go:89] found id: "d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d"
	I1013 23:14:08.174812  612709 cri.go:89] found id: "fd8abca92b65e2224720afca413ecd65f3d828117b27b543bbf324c4b469d469"
	I1013 23:14:08.174815  612709 cri.go:89] found id: "f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4"
	I1013 23:14:08.174840  612709 cri.go:89] found id: "11f465f0a5f2ad80579ad95495a8d238edc5b96f89e351cf305a5ec396507d05"
	I1013 23:14:08.174852  612709 cri.go:89] found id: "2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1"
	I1013 23:14:08.174856  612709 cri.go:89] found id: "8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	I1013 23:14:08.174859  612709 cri.go:89] found id: ""
	I1013 23:14:08.174934  612709 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:14:08.186938  612709 retry.go:31] will retry after 409.046318ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:14:08Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:14:08.596291  612709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:14:08.611820  612709 pause.go:52] kubelet running: false
	I1013 23:14:08.611949  612709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:14:08.799153  612709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:14:08.799277  612709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:14:08.889780  612709 cri.go:89] found id: "e6509a2c244fb78a077b629a507425ed00e44a5cf154bde09ba3a82adad1c173"
	I1013 23:14:08.889847  612709 cri.go:89] found id: "7a67a57eec433712b1a70f2b083b16db62ef0096a63d6df917fb42b8c3e00b88"
	I1013 23:14:08.889866  612709 cri.go:89] found id: "c7cc067a350042f099fb0283fd178fc3d2dfe4c66947450412f9a359cb5eb276"
	I1013 23:14:08.889887  612709 cri.go:89] found id: "5a7cd159eef62e90e54723908fed3e0842527fca41323ee49c8f86d31c4ae5cb"
	I1013 23:14:08.889905  612709 cri.go:89] found id: "3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7"
	I1013 23:14:08.889936  612709 cri.go:89] found id: "d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d"
	I1013 23:14:08.889959  612709 cri.go:89] found id: "fd8abca92b65e2224720afca413ecd65f3d828117b27b543bbf324c4b469d469"
	I1013 23:14:08.889978  612709 cri.go:89] found id: "f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4"
	I1013 23:14:08.889998  612709 cri.go:89] found id: "11f465f0a5f2ad80579ad95495a8d238edc5b96f89e351cf305a5ec396507d05"
	I1013 23:14:08.890020  612709 cri.go:89] found id: "2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1"
	I1013 23:14:08.890045  612709 cri.go:89] found id: "8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	I1013 23:14:08.890068  612709 cri.go:89] found id: ""
	I1013 23:14:08.890146  612709 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:14:08.905249  612709 out.go:203] 
	W1013 23:14:08.908273  612709 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:14:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:14:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:14:08.908297  612709 out.go:285] * 
	* 
	W1013 23:14:08.915460  612709 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:14:08.918269  612709 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-670275 --alsologtostderr -v=1 failed: exit status 80
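The exit-80 trace above reduces to one mismatch: crictl, talking to CRI-O, still enumerates eleven kube-system container IDs, but `sudo runc list -f json` fails because runc's default state root, /run/runc, does not exist on the node; minikube retries the listing (retry.go, ~300-400ms backoffs) and then aborts with GUEST_PAUSE. A minimal Go sketch of the same two probes, run locally rather than over SSH (assumptions: root privileges, runc and crictl on PATH; the alternate state root in the last probe is a hypothetical example, not CRI-O's configured value):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output plus any error,
	// mirroring what minikube's ssh_runner logs for each probe.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v -> err=%v\n%s\n", name, args, err, out)
	}

	func main() {
		// Probe 1: what the CRI sees. This matched eleven container IDs in the log.
		run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		// Probe 2: what the pause path actually gates on. On this node it fails
		// with "open /run/runc: no such file or directory".
		run("sudo", "runc", "list", "-f", "json")
		// runc's global --root flag selects the state directory; probing an
		// alternate root (hypothetical path) would show whether the state
		// merely lives somewhere other than the default /run/runc.
		run("sudo", "runc", "--root", "/run/containers/runc", "list", "-f", "json")
	}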
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670275
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	        "Created": "2025-10-13T23:11:32.172538967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 610615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:13:00.826117895Z",
	            "FinishedAt": "2025-10-13T23:12:59.730973821Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hosts",
	        "LogPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d-json.log",
	        "Name": "/old-k8s-version-670275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670275:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	                "LowerDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670275",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cafceab1c111e9de613e835c2090b626f3905f27d361492317eac927aa7e1bcb",
	            "SandboxKey": "/var/run/docker/netns/cafceab1c111",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:69:d5:63:dd:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6af44596a4f08c38fd2f582b08ef6f7af936522e458a8a952d1d21c07e6e39f9",
	                    "EndpointID": "345a24c85830f6bb3f226768986c34a02adfbd70812a642ee627fcc9fa49bb31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670275",
	                        "d5a910fa7ea2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
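For context on the port mappings above: the pause command resolved its SSH endpoint from exactly this structure, via the template logged earlier ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} -> 33449). A minimal Go sketch of the same lookup, decoding `docker inspect` output directly (only the fields used here are modeled; the profile name follows the log):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models just the slice of `docker inspect` JSON needed to
	// recover a host-mapped port.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-670275").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// Ports["22/tcp"][0] is what the logged Go template reads.
		b := containers[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33449
	}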
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275: exit status 2 (362.951422ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25: (1.373598778s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-557095 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo containerd config dump                                                                                                                                                                                                  │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo crio config                                                                                                                                                                                                             │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ delete  │ -p cilium-557095                                                                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p kubernetes-upgrade-211312                                                                                                                                                                                                                  │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p force-systemd-env-255188                                                                                                                                                                                                                   │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ cert-options-051941 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:13:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:13:00.557380  610490 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:13:00.557498  610490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:13:00.557509  610490 out.go:374] Setting ErrFile to fd 2...
	I1013 23:13:00.557514  610490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:13:00.557774  610490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:13:00.558155  610490 out.go:368] Setting JSON to false
	I1013 23:13:00.559177  610490 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10517,"bootTime":1760386664,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:13:00.559259  610490 start.go:141] virtualization:  
	I1013 23:13:00.562572  610490 out.go:179] * [old-k8s-version-670275] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:13:00.566464  610490 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:13:00.566538  610490 notify.go:220] Checking for updates...
	I1013 23:13:00.571964  610490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:13:00.575499  610490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:00.578118  610490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:13:00.580729  610490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:13:00.583352  610490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:13:00.586551  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:00.590044  610490 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1013 23:13:00.592821  610490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:13:00.614227  610490 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:13:00.614359  610490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:13:00.672688  610490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:13:00.662694266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:13:00.672797  610490 docker.go:318] overlay module found
	I1013 23:13:00.676277  610490 out.go:179] * Using the docker driver based on existing profile
	I1013 23:13:00.679101  610490 start.go:305] selected driver: docker
	I1013 23:13:00.679125  610490 start.go:925] validating driver "docker" against &{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:00.679249  610490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:13:00.680005  610490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:13:00.736640  610490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:13:00.726582788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:13:00.736981  610490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:13:00.737018  610490 cni.go:84] Creating CNI manager for ""
	I1013 23:13:00.737082  610490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:13:00.737129  610490 start.go:349] cluster config:
	{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:00.740488  610490 out.go:179] * Starting "old-k8s-version-670275" primary control-plane node in "old-k8s-version-670275" cluster
	I1013 23:13:00.743243  610490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:13:00.746094  610490 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:13:00.748977  610490 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:13:00.749041  610490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 23:13:00.749057  610490 cache.go:58] Caching tarball of preloaded images
	I1013 23:13:00.749069  610490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:13:00.749159  610490 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:13:00.749169  610490 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1013 23:13:00.749286  610490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:13:00.768739  610490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:13:00.768764  610490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:13:00.768783  610490 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:13:00.768814  610490 start.go:360] acquireMachinesLock for old-k8s-version-670275: {Name:mk06171e4a123ca0a835c4c644ea27e36804aedc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:13:00.768887  610490 start.go:364] duration metric: took 48.901µs to acquireMachinesLock for "old-k8s-version-670275"
	I1013 23:13:00.768913  610490 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:13:00.768931  610490 fix.go:54] fixHost starting: 
	I1013 23:13:00.769209  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:00.789422  610490 fix.go:112] recreateIfNeeded on old-k8s-version-670275: state=Stopped err=<nil>
	W1013 23:13:00.789451  610490 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:13:00.792685  610490 out.go:252] * Restarting existing docker container for "old-k8s-version-670275" ...
	I1013 23:13:00.792790  610490 cli_runner.go:164] Run: docker start old-k8s-version-670275
	I1013 23:13:01.056912  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:01.081687  610490 kic.go:430] container "old-k8s-version-670275" state is running.
	I1013 23:13:01.082106  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:01.104306  610490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:13:01.104549  610490 machine.go:93] provisionDockerMachine start ...
	I1013 23:13:01.104639  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:01.129613  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:01.130201  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:01.130218  610490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:13:01.130996  610490 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 23:13:04.278580  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:13:04.278608  610490 ubuntu.go:182] provisioning hostname "old-k8s-version-670275"
	I1013 23:13:04.278687  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:04.295968  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:04.296282  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:04.296294  610490 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-670275 && echo "old-k8s-version-670275" | sudo tee /etc/hostname
	I1013 23:13:04.452295  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:13:04.452376  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:04.471065  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:04.471452  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:04.471480  610490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-670275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-670275/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-670275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:13:04.619375  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:13:04.619405  610490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:13:04.619442  610490 ubuntu.go:190] setting up certificates
	I1013 23:13:04.619451  610490 provision.go:84] configureAuth start
	I1013 23:13:04.619513  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:04.636148  610490 provision.go:143] copyHostCerts
	I1013 23:13:04.636218  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:13:04.636241  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:13:04.636322  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:13:04.636429  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:13:04.636441  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:13:04.636473  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:13:04.636542  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:13:04.636553  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:13:04.636578  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:13:04.636648  610490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-670275 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-670275]
	I1013 23:13:05.043986  610490 provision.go:177] copyRemoteCerts
	I1013 23:13:05.044066  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:13:05.044106  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.062418  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
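	(The IP/Port/SSHKeyPath/Username quadruple logged above is everything needed to reproduce this session by hand. A minimal sketch using exactly those values; the -o options are an assumption for a throwaway CI host, not something minikube logs:)
	# hypothetical manual login to the node container, values copied from the log line above
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa \
	  -p 33449 docker@127.0.0.1 hostname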
	I1013 23:13:05.171376  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:13:05.191362  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1013 23:13:05.210080  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:13:05.229121  610490 provision.go:87] duration metric: took 609.652895ms to configureAuth
	I1013 23:13:05.229192  610490 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:13:05.229409  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:05.229544  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.247439  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:05.247762  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:05.247785  610490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:13:05.563027  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:13:05.563121  610490 machine.go:96] duration metric: took 4.458561378s to provisionDockerMachine
	I1013 23:13:05.563148  610490 start.go:293] postStartSetup for "old-k8s-version-670275" (driver="docker")
	I1013 23:13:05.563176  610490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:13:05.563279  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:13:05.563347  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.583876  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.688701  610490 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:13:05.693030  610490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:13:05.693061  610490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:13:05.693072  610490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:13:05.693127  610490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:13:05.693211  610490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:13:05.693322  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:13:05.700957  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:13:05.718655  610490 start.go:296] duration metric: took 155.47451ms for postStartSetup
	I1013 23:13:05.718757  610490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:13:05.718805  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.735295  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.840308  610490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:13:05.845298  610490 fix.go:56] duration metric: took 5.076368294s for fixHost
	I1013 23:13:05.845321  610490 start.go:83] releasing machines lock for "old-k8s-version-670275", held for 5.076419887s
	I1013 23:13:05.845415  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:05.863326  610490 ssh_runner.go:195] Run: cat /version.json
	I1013 23:13:05.863377  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.863411  610490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:13:05.863471  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.882029  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.884849  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:06.091138  610490 ssh_runner.go:195] Run: systemctl --version
	I1013 23:13:06.097748  610490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:13:06.135848  610490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:13:06.140223  610490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:13:06.140307  610490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:13:06.148398  610490 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:13:06.148426  610490 start.go:495] detecting cgroup driver to use...
	I1013 23:13:06.148459  610490 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:13:06.148507  610490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:13:06.164036  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:13:06.176847  610490 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:13:06.176905  610490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:13:06.192803  610490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:13:06.206515  610490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:13:06.325886  610490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:13:06.447973  610490 docker.go:234] disabling docker service ...
	I1013 23:13:06.448036  610490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:13:06.463061  610490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:13:06.476407  610490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:13:06.589594  610490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:13:06.713882  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:13:06.729193  610490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:13:06.745638  610490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1013 23:13:06.745753  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.754867  610490 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:13:06.754968  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.764322  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.773138  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.783683  610490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:13:06.792022  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.801543  610490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.810776  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
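	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands, with the standard CRI-O section headers assumed rather than read off the node:)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]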
	I1013 23:13:06.820571  610490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:13:06.828467  610490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:13:06.844394  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:06.955876  610490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:13:07.100453  610490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:13:07.100611  610490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:13:07.104393  610490 start.go:563] Will wait 60s for crictl version
	I1013 23:13:07.104496  610490 ssh_runner.go:195] Run: which crictl
	I1013 23:13:07.108408  610490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:13:07.134838  610490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
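	(crictl found the CRI-O socket via the one-line /etc/crictl.yaml written at 23:13:06 above. A sketch of that file plus an equivalent invocation that names the endpoint explicitly instead of relying on the config file:)
	# contents of /etc/crictl.yaml as written earlier in this run
	runtime-endpoint: unix:///var/run/crio/crio.sock
	# same query without the config file
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version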
	I1013 23:13:07.134985  610490 ssh_runner.go:195] Run: crio --version
	I1013 23:13:07.163910  610490 ssh_runner.go:195] Run: crio --version
	I1013 23:13:07.198798  610490 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1013 23:13:07.201683  610490 cli_runner.go:164] Run: docker network inspect old-k8s-version-670275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:13:07.217891  610490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:13:07.221861  610490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:13:07.232091  610490 kubeadm.go:883] updating cluster {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:13:07.232219  610490 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:13:07.232276  610490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:13:07.265579  610490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:13:07.265604  610490 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:13:07.265684  610490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:13:07.290509  610490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:13:07.290541  610490 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:13:07.290550  610490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1013 23:13:07.290643  610490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-670275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
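	(Once the unit file and the 10-kubeadm.conf drop-in are copied over a few lines below, the merged result systemd actually uses can be checked on the node; a hedged one-liner, mirroring the systemctl cat pattern used for crio earlier in this report:)
	sudo systemctl cat kubelet --no-pager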
	I1013 23:13:07.290736  610490 ssh_runner.go:195] Run: crio config
	I1013 23:13:07.358737  610490 cni.go:84] Creating CNI manager for ""
	I1013 23:13:07.358767  610490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:13:07.358794  610490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:13:07.358822  610490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-670275 NodeName:old-k8s-version-670275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:13:07.358959  610490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-670275"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
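	(Before kubeadm consumes it, the file this YAML is written to, /var/tmp/minikube/kubeadm.yaml.new, can be sanity-checked in place. A sketch, assuming the kubeadm binary sits next to kubelet in the versioned binaries directory and that its v1.28 build carries the `config validate` subcommand, which exists from v1.26 on:)
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new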
	I1013 23:13:07.359033  610490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1013 23:13:07.366873  610490 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:13:07.366965  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:13:07.374609  610490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1013 23:13:07.387277  610490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:13:07.402725  610490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1013 23:13:07.415724  610490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:13:07.419702  610490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
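	(This is the second appearance of the grep -v / append / cp idiom, after host.minikube.internal at 23:13:07.221861: drop any old line for the name, append the fresh one, copy the result back over /etc/hosts. The same pattern as a hypothetical reusable helper:)
	# replace-or-add one /etc/hosts entry without duplicating it (hypothetical helper)
	set_host_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	}
	set_host_entry 192.168.85.2 control-plane.minikube.internal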
	I1013 23:13:07.430318  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:07.544412  610490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:13:07.566316  610490 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275 for IP: 192.168.85.2
	I1013 23:13:07.566339  610490 certs.go:195] generating shared ca certs ...
	I1013 23:13:07.566358  610490 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:07.566504  610490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:13:07.566559  610490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:13:07.566572  610490 certs.go:257] generating profile certs ...
	I1013 23:13:07.566658  610490 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.key
	I1013 23:13:07.566730  610490 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a
	I1013 23:13:07.566774  610490 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key
	I1013 23:13:07.566894  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:13:07.566929  610490 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:13:07.566945  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:13:07.566970  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:13:07.566996  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:13:07.567030  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:13:07.567138  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:13:07.567770  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:13:07.588532  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:13:07.605581  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:13:07.622796  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:13:07.640661  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 23:13:07.661434  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:13:07.693458  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:13:07.715784  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:13:07.742428  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:13:07.768613  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:13:07.795848  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:13:07.817497  610490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:13:07.840626  610490 ssh_runner.go:195] Run: openssl version
	I1013 23:13:07.847190  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:13:07.856903  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.860865  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.860959  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.905027  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:13:07.913604  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:13:07.922820  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.926924  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.927043  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.968385  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:13:07.976997  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:13:07.986208  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:13:07.989986  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:13:07.990061  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:13:08.032849  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
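	(The three openssl/ln pairs above implement OpenSSL's hashed CA-directory lookup: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is what lets TLS clients find the PEM. Recomputing one of the links created above as a check:)
	# should print b5213941, matching the b5213941.0 symlink made for minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0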
	I1013 23:13:08.041715  610490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:13:08.045833  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:13:08.088162  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:13:08.129905  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:13:08.171259  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:13:08.217154  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:13:08.288273  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
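	(Each of the six checks above uses `openssl x509 -checkend 86400`, which asks whether the certificate expires within the next 86400 seconds, i.e. 24 hours; the answer comes back in the exit status, 0 meaning still valid past the window. The same check made explicit for one cert:)
	# exit 0 = valid for at least another 24h; exit 1 = expires within 24h
	if sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "front-proxy-client.crt: valid for at least another 24h"
	else
	  echo "front-proxy-client.crt: expires within 24h"
	fi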
	I1013 23:13:08.344471  610490 kubeadm.go:400] StartCluster: {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:08.344613  610490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:13:08.344719  610490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:13:08.414375  610490 cri.go:89] found id: "d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d"
	I1013 23:13:08.414442  610490 cri.go:89] found id: "f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4"
	I1013 23:13:08.414461  610490 cri.go:89] found id: ""
	I1013 23:13:08.414562  610490 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:13:08.454696  610490 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:13:08Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:13:08.454833  610490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:13:08.473688  610490 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:13:08.473757  610490 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:13:08.473842  610490 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:13:08.485471  610490 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:13:08.486126  610490 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-670275" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:08.486450  610490 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-670275" cluster setting kubeconfig missing "old-k8s-version-670275" context setting]
	I1013 23:13:08.486941  610490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
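
The repair adds the missing cluster and context entries to the shared kubeconfig under a write lock. A minimal client-go sketch of the same update, assuming the server address and names from the log and omitting the file lock (the CA path is illustrative):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/21724-428797/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatal(err)
        }
        name := "old-k8s-version-670275"
        cluster := api.NewCluster()
        cluster.Server = "https://192.168.85.2:8443"
        cluster.CertificateAuthority = "/home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt" // assumed location
        cfg.Clusters[name] = cluster
        ctx := api.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name
        cfg.Contexts[name] = ctx
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
    }
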
	I1013 23:13:08.490006  610490 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:13:08.507777  610490 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:13:08.507858  610490 kubeadm.go:601] duration metric: took 34.076608ms to restartPrimaryControlPlane
	I1013 23:13:08.507883  610490 kubeadm.go:402] duration metric: took 163.423116ms to StartCluster
	I1013 23:13:08.507928  610490 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:08.508013  610490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:08.508934  610490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:08.509248  610490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:13:08.509609  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:08.509685  610490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:13:08.509943  610490 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-670275"
	I1013 23:13:08.509993  610490 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-670275"
	W1013 23:13:08.510029  610490 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:13:08.510118  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.510693  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.510907  610490 addons.go:69] Setting dashboard=true in profile "old-k8s-version-670275"
	I1013 23:13:08.510948  610490 addons.go:238] Setting addon dashboard=true in "old-k8s-version-670275"
	W1013 23:13:08.510969  610490 addons.go:247] addon dashboard should already be in state true
	I1013 23:13:08.511022  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.511332  610490 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-670275"
	I1013 23:13:08.511361  610490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-670275"
	I1013 23:13:08.511556  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.511676  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.515154  610490 out.go:179] * Verifying Kubernetes components...
	I1013 23:13:08.521415  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:08.548895  610490 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:13:08.551934  610490 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:13:08.557070  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:13:08.557099  610490 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:13:08.557201  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.579980  610490 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-670275"
	W1013 23:13:08.580002  610490 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:13:08.580027  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.580430  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.584639  610490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:13:08.590121  610490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:13:08.590166  610490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:13:08.590246  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.607439  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:08.641051  610490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:13:08.641069  610490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:13:08.641131  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.643808  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:08.683635  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
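
Each sshutil line above opens an SSH session to the node's sshd, which Docker publishes on host port 33449, authenticating as the docker user with the profile's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of such a client, with host-key checking skipped for brevity:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify in real use
        }
        // 127.0.0.1:33449 is the published port logged above.
        client, err := ssh.Dial("tcp", "127.0.0.1:33449", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s err=%v\n", out, err)
    }
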
	I1013 23:13:08.853731  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:13:08.853807  610490 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:13:08.879677  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:13:08.879757  610490 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:13:08.895738  610490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:13:08.919465  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:13:08.919539  610490 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:13:08.941521  610490 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-670275" to be "Ready" ...
	I1013 23:13:08.964838  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:13:08.964857  610490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:13:08.984698  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:13:09.020279  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:13:09.029899  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:13:09.029974  610490 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:13:09.088668  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:13:09.088737  610490 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:13:09.122597  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:13:09.122664  610490 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:13:09.216046  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:13:09.216116  610490 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:13:09.299971  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:13:09.300049  610490 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:13:09.321818  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:13:13.179872  610490 node_ready.go:49] node "old-k8s-version-670275" is "Ready"
	I1013 23:13:13.179904  610490 node_ready.go:38] duration metric: took 4.238291801s for node "old-k8s-version-670275" to be "Ready" ...
	I1013 23:13:13.179919  610490 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:13:13.179981  610490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:13:14.345636  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.3609062s)
	I1013 23:13:14.828340  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.807966615s)
	I1013 23:13:15.524191  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.202289934s)
	I1013 23:13:15.524233  610490 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.344229976s)
	I1013 23:13:15.524258  610490 api_server.go:72] duration metric: took 7.014956088s to wait for apiserver process to appear ...
	I1013 23:13:15.524323  610490 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:13:15.524341  610490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:13:15.527353  610490 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-670275 addons enable metrics-server
	
	I1013 23:13:15.530427  610490 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:13:15.533457  610490 addons.go:514] duration metric: took 7.023759829s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:13:15.534271  610490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:13:15.535913  610490 api_server.go:141] control plane version: v1.28.0
	I1013 23:13:15.535942  610490 api_server.go:131] duration metric: took 11.612135ms to wait for apiserver health ...
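
The healthz wait is a plain HTTPS GET against https://192.168.85.2:8443/healthz that succeeds once the body reads `ok` with status 200, as logged just above. A hedged Go sketch of that probe (it skips TLS verification for brevity; the real client presents the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: skip verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode == http.StatusOK && string(body) == "ok" {
            fmt.Println("healthz returned 200: ok")
        }
    }
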
	I1013 23:13:15.535951  610490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:13:15.539639  610490 system_pods.go:59] 8 kube-system pods found
	I1013 23:13:15.539678  610490 system_pods.go:61] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:13:15.539688  610490 system_pods.go:61] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:13:15.539694  610490 system_pods.go:61] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:13:15.539701  610490 system_pods.go:61] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:13:15.539709  610490 system_pods.go:61] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:13:15.539715  610490 system_pods.go:61] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:13:15.539732  610490 system_pods.go:61] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:13:15.539744  610490 system_pods.go:61] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Running
	I1013 23:13:15.539750  610490 system_pods.go:74] duration metric: took 3.794392ms to wait for pod list to return data ...
	I1013 23:13:15.539758  610490 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:13:15.542025  610490 default_sa.go:45] found service account: "default"
	I1013 23:13:15.542049  610490 default_sa.go:55] duration metric: took 2.276868ms for default service account to be created ...
	I1013 23:13:15.542058  610490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:13:15.545464  610490 system_pods.go:86] 8 kube-system pods found
	I1013 23:13:15.545510  610490 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:13:15.545520  610490 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:13:15.545527  610490 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:13:15.545535  610490 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:13:15.545545  610490 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:13:15.545553  610490 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:13:15.545559  610490 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:13:15.545566  610490 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Running
	I1013 23:13:15.545574  610490 system_pods.go:126] duration metric: took 3.510089ms to wait for k8s-apps to be running ...
	I1013 23:13:15.545584  610490 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:13:15.545657  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:13:15.559529  610490 system_svc.go:56] duration metric: took 13.935033ms WaitForService to wait for kubelet
	I1013 23:13:15.559561  610490 kubeadm.go:586] duration metric: took 7.050257065s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:13:15.559586  610490 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:13:15.562913  610490 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:13:15.562952  610490 node_conditions.go:123] node cpu capacity is 2
	I1013 23:13:15.562965  610490 node_conditions.go:105] duration metric: took 3.372287ms to run NodePressure ...
	I1013 23:13:15.562976  610490 start.go:241] waiting for startup goroutines ...
	I1013 23:13:15.562984  610490 start.go:246] waiting for cluster config update ...
	I1013 23:13:15.562995  610490 start.go:255] writing updated cluster config ...
	I1013 23:13:15.563339  610490 ssh_runner.go:195] Run: rm -f paused
	I1013 23:13:15.567408  610490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:13:15.572084  610490 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:13:17.579711  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:20.078394  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:22.078552  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:24.579559  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:27.077751  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:29.077987  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:31.079794  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:33.580013  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:36.078718  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:38.078921  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:40.079015  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:42.100155  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:44.588639  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:47.079065  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:49.578271  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	I1013 23:13:50.578463  610490 pod_ready.go:94] pod "coredns-5dd5756b68-9jcbh" is "Ready"
	I1013 23:13:50.578495  610490 pod_ready.go:86] duration metric: took 35.006383806s for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.582580  610490 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.590536  610490 pod_ready.go:94] pod "etcd-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.590567  610490 pod_ready.go:86] duration metric: took 7.954823ms for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.593784  610490 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.602137  610490 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.602166  610490 pod_ready.go:86] duration metric: took 8.358131ms for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.605845  610490 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.776126  610490 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.776158  610490 pod_ready.go:86] duration metric: took 170.284777ms for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.977140  610490 pod_ready.go:83] waiting for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.375267  610490 pod_ready.go:94] pod "kube-proxy-2ph29" is "Ready"
	I1013 23:13:51.375313  610490 pod_ready.go:86] duration metric: took 398.14861ms for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.576183  610490 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.976923  610490 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-670275" is "Ready"
	I1013 23:13:51.976949  610490 pod_ready.go:86] duration metric: took 400.741919ms for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.976971  610490 pod_ready.go:40] duration metric: took 36.409529307s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
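
The 36.4s of extra waiting above is a label-based poll: every kube-system pod matching one of the listed labels must report the PodReady condition before start returns. An equivalent client-go sketch for a single label, assuming an illustrative kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                    metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling through transient errors
                }
                for _, p := range pods.Items {
                    ready := false
                    for _, c := range p.Status.Conditions {
                        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                            ready = true
                        }
                    }
                    if !ready {
                        return false, nil
                    }
                }
                return true, nil
            })
        fmt.Println("coredns ready:", err == nil)
    }
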
	I1013 23:13:52.055472  610490 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 23:13:52.062183  610490 out.go:203] 
	W1013 23:13:52.067059  610490 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 23:13:52.071713  610490 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 23:13:52.076040  610490 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-670275" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.928342585Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=cb9aeaee-b4cc-436a-bf2a-b9d85ff17653 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.930190055Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7e856331-2002-48b2-b6a7-300274a35550 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.932764296Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard" id=5b36e378-d39d-4f5c-96d2-7050a0c0df41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.933837291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939255921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939508543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f0184549755fc5de37a88d6e76383f5797b677a90aaf5b4a768c439a881f6030/merged/etc/group: no such file or directory"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939929679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.957909504Z" level=info msg="Created container 2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard" id=5b36e378-d39d-4f5c-96d2-7050a0c0df41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.96207257Z" level=info msg="Starting container: 2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1" id=d50e996d-c3d4-43fa-a16b-2178cb5d3b8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.96989179Z" level=info msg="Started container" PID=1658 containerID=2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard id=d50e996d-c3d4-43fa-a16b-2178cb5d3b8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.64300839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.65068811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.650723227Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.650746275Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.65479562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.654959087Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.654992408Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658196363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658228994Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658254979Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661693637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661757545Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661780125Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.667997253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.668593557Z" level=info msg="Updated default CNI network name to kindnet"
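
The CREATE/WRITE/RENAME burst above is CRI-O's filesystem watch on /etc/cni/net.d reacting to kindnet writing 10-kindnet.conflist.temp and then renaming it into place; each event makes CRI-O re-resolve the default CNI network. A minimal fsnotify sketch of such a watcher (github.com/fsnotify/fsnotify; the directory comes from the log):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // A config reload on CREATE/WRITE/RENAME is why the atomic
                // .temp-then-rename dance above fires several events in a row.
                log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
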
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2d3d6a750dbd2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   16 seconds ago       Running             kubernetes-dashboard        0                   2781dfc1213bb       kubernetes-dashboard-8694d4445c-gg5tp            kubernetes-dashboard
	8f768fe52ac67       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   3df08d3a008fa       dashboard-metrics-scraper-5f989dc9cf-9ldqs       kubernetes-dashboard
	e6509a2c244fb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   e33679326303b       storage-provisioner                              kube-system
	7a67a57eec433       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   6aaa38987d805       coredns-5dd5756b68-9jcbh                         kube-system
	dc720d599612b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   5d48ffc5adac1       busybox                                          default
	c7cc067a35004       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   cfb91dd263d16       kube-proxy-2ph29                                 kube-system
	5a7cd159eef62       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   c64dd1920f904       kindnet-c6xtc                                    kube-system
	3deee803917a6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   e33679326303b       storage-provisioner                              kube-system
	d6122cdac2105       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   1997e020ecd4b       kube-apiserver-old-k8s-version-670275            kube-system
	fd8abca92b65e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   13a0f75c0dd41       etcd-old-k8s-version-670275                      kube-system
	f553bdbd313ae       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   6fccb6fc01afc       kube-controller-manager-old-k8s-version-670275   kube-system
	11f465f0a5f2a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   0566d55a5dd60       kube-scheduler-old-k8s-version-670275            kube-system
	
	
	==> coredns [7a67a57eec433712b1a70f2b083b16db62ef0096a63d6df917fb42b8c3e00b88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59411 - 61115 "HINFO IN 5199209719392204441.1153077157160991146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029488494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-670275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:14:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-670275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3228e95e-3de7-463c-a3f6-be9dbc04be1a
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-9jcbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 etcd-old-k8s-version-670275                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-c6xtc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-old-k8s-version-670275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-old-k8s-version-670275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-2ph29                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-old-k8s-version-670275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9ldqs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gg5tp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 117s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s              kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s              kubelet          Node old-k8s-version-670275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s              kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m12s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           119s               node-controller  Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-670275 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller
	
	
	==> dmesg <==
	[Oct13 22:45] overlayfs: idmapped layers are currently not supported
	[Oct13 22:50] overlayfs: idmapped layers are currently not supported
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fd8abca92b65e2224720afca413ecd65f3d828117b27b543bbf324c4b469d469] <==
	{"level":"info","ts":"2025-10-13T23:13:09.027564Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:13:09.027572Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:13:09.027763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-13T23:13:09.027819Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T23:13:09.027885Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:13:09.027911Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:13:09.052751Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T23:13:09.053143Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T23:13:09.053168Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T23:13:09.053385Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:13:09.053396Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:13:10.62057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.623322Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-670275 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T23:13:10.623506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:13:10.62464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T23:13:10.627475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:13:10.627701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T23:13:10.628767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T23:13:10.633696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:14:10 up  2:56,  0 user,  load average: 1.27, 2.53, 2.32
	Linux old-k8s-version-670275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a7cd159eef62e90e54723908fed3e0842527fca41323ee49c8f86d31c4ae5cb] <==
	I1013 23:13:14.486333       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:13:14.486552       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:13:14.486669       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:13:14.486680       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:13:14.486692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:13:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:13:14.635328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:13:14.635352       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:13:14.635361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:13:14.635805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:13:44.634825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:13:44.635781       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:13:44.635859       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:13:44.637034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 23:13:46.136436       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:13:46.136467       1 metrics.go:72] Registering metrics
	I1013 23:13:46.136545       1 controller.go:711] "Syncing nftables rules"
	I1013 23:13:54.642714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:13:54.642775       1 main.go:301] handling current node
	I1013 23:14:04.638337       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:14:04.638368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d] <==
	I1013 23:13:13.166572       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1013 23:13:13.167018       1 aggregator.go:166] initial CRD sync complete...
	I1013 23:13:13.167038       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 23:13:13.167046       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:13:13.218877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:13:13.257160       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1013 23:13:13.258251       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:13:13.270433       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:13:13.282372       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1013 23:13:13.283181       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 23:13:13.283236       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 23:13:13.283634       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1013 23:13:13.283650       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1013 23:13:13.329193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:13:14.025780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:13:15.349743       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 23:13:15.396386       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 23:13:15.422897       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:13:15.433101       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:13:15.445242       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 23:13:15.496966       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.117.95"}
	I1013 23:13:15.517069       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.143.180"}
	I1013 23:13:25.997324       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1013 23:13:26.083890       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 23:13:26.110840       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4] <==
	I1013 23:13:25.792511       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 23:13:26.002059       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 23:13:26.009961       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 23:13:26.034748       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	I1013 23:13:26.034877       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gg5tp"
	I1013 23:13:26.047211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.294541ms"
	I1013 23:13:26.065487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.715993ms"
	I1013 23:13:26.089861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.538556ms"
	I1013 23:13:26.090052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.386µs"
	I1013 23:13:26.121731       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1013 23:13:26.130112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.498557ms"
	I1013 23:13:26.130270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.021µs"
	I1013 23:13:26.130527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.521µs"
	I1013 23:13:26.134896       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:13:26.139491       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:13:26.139590       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 23:13:32.923373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.083µs"
	I1013 23:13:33.933143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.354µs"
	I1013 23:13:37.276334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="195.073µs"
	I1013 23:13:49.984035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.725µs"
	I1013 23:13:50.510919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.60604ms"
	I1013 23:13:50.511205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.315µs"
	I1013 23:13:55.010400       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.087092ms"
	I1013 23:13:55.011133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.106µs"
	I1013 23:13:57.279333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="252.737µs"
	
	
	==> kube-proxy [c7cc067a350042f099fb0283fd178fc3d2dfe4c66947450412f9a359cb5eb276] <==
	I1013 23:13:14.732524       1 server_others.go:69] "Using iptables proxy"
	I1013 23:13:14.771649       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 23:13:14.881140       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:13:14.883390       1 server_others.go:152] "Using iptables Proxier"
	I1013 23:13:14.883428       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 23:13:14.883436       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 23:13:14.883460       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 23:13:14.883697       1 server.go:846] "Version info" version="v1.28.0"
	I1013 23:13:14.883707       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:13:14.886092       1 config.go:188] "Starting service config controller"
	I1013 23:13:14.886123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 23:13:14.886141       1 config.go:97] "Starting endpoint slice config controller"
	I1013 23:13:14.886146       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 23:13:14.886716       1 config.go:315] "Starting node config controller"
	I1013 23:13:14.886734       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 23:13:14.986477       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1013 23:13:14.986540       1 shared_informer.go:318] Caches are synced for service config
	I1013 23:13:14.986819       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [11f465f0a5f2ad80579ad95495a8d238edc5b96f89e351cf305a5ec396507d05] <==
	I1013 23:13:11.192336       1 serving.go:348] Generated self-signed cert in-memory
	I1013 23:13:13.546556       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 23:13:13.546649       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:13:13.555325       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 23:13:13.555479       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1013 23:13:13.555518       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1013 23:13:13.555559       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 23:13:13.557192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:13:13.563307       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 23:13:13.563432       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:13:13.563478       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 23:13:13.655890       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1013 23:13:13.664538       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 23:13:13.664626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: E1013 23:13:26.073522     775 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-670275" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-670275' and this object
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193557     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/86728516-6908-4c5c-91e7-e39eb9a82389-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gg5tp\" (UID: \"86728516-6908-4c5c-91e7-e39eb9a82389\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193678     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg58s\" (UniqueName: \"kubernetes.io/projected/86728516-6908-4c5c-91e7-e39eb9a82389-kube-api-access-vg58s\") pod \"kubernetes-dashboard-8694d4445c-gg5tp\" (UID: \"86728516-6908-4c5c-91e7-e39eb9a82389\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193709     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/472e5777-cda5-42db-bcc5-c6cf24d06bce-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9ldqs\" (UID: \"472e5777-cda5-42db-bcc5-c6cf24d06bce\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193787     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gllz\" (UniqueName: \"kubernetes.io/projected/472e5777-cda5-42db-bcc5-c6cf24d06bce-kube-api-access-2gllz\") pod \"dashboard-metrics-scraper-5f989dc9cf-9ldqs\" (UID: \"472e5777-cda5-42db-bcc5-c6cf24d06bce\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	Oct 13 23:13:27 old-k8s-version-670275 kubelet[775]: W1013 23:13:27.291405     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3 WatchSource:0}: Error finding container 3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3: Status 404 returned error can't find the container with id 3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3
	Oct 13 23:13:27 old-k8s-version-670275 kubelet[775]: W1013 23:13:27.297954     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce WatchSource:0}: Error finding container 2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce: Status 404 returned error can't find the container with id 2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce
	Oct 13 23:13:32 old-k8s-version-670275 kubelet[775]: I1013 23:13:32.905809     775 scope.go:117] "RemoveContainer" containerID="aa271ed8151db7e41be5e365bf08753541ea0964eaac4540a2661d5764c8a31d"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: I1013 23:13:33.910148     775 scope.go:117] "RemoveContainer" containerID="aa271ed8151db7e41be5e365bf08753541ea0964eaac4540a2661d5764c8a31d"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: I1013 23:13:33.910848     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: E1013 23:13:33.911284     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:37 old-k8s-version-670275 kubelet[775]: I1013 23:13:37.262337     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:37 old-k8s-version-670275 kubelet[775]: E1013 23:13:37.262696     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:44 old-k8s-version-670275 kubelet[775]: I1013 23:13:44.940487     775 scope.go:117] "RemoveContainer" containerID="3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.772134     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.955492     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.955767     775 scope.go:117] "RemoveContainer" containerID="8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: E1013 23:13:49.956098     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: I1013 23:13:57.262719     775 scope.go:117] "RemoveContainer" containerID="8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: E1013 23:13:57.263564     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: I1013 23:13:57.280040     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp" podStartSLOduration=4.654652587 podCreationTimestamp="2025-10-13 23:13:26 +0000 UTC" firstStartedPulling="2025-10-13 23:13:27.303319297 +0000 UTC m=+19.739571196" lastFinishedPulling="2025-10-13 23:13:53.928647194 +0000 UTC m=+46.364899093" observedRunningTime="2025-10-13 23:13:55.001073204 +0000 UTC m=+47.437325111" watchObservedRunningTime="2025-10-13 23:13:57.279980484 +0000 UTC m=+49.716232383"
	Oct 13 23:14:07 old-k8s-version-670275 kubelet[775]: I1013 23:14:07.458500     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1] <==
	2025/10/13 23:13:53 Using namespace: kubernetes-dashboard
	2025/10/13 23:13:53 Using in-cluster config to connect to apiserver
	2025/10/13 23:13:53 Using secret token for csrf signing
	2025/10/13 23:13:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:13:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:13:54 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 23:13:54 Generating JWE encryption key
	2025/10/13 23:13:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:13:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:13:54 Initializing JWE encryption key from synchronized object
	2025/10/13 23:13:54 Creating in-cluster Sidecar client
	2025/10/13 23:13:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:13:54 Serving insecurely on HTTP port: 9090
	2025/10/13 23:13:53 Starting overwatch
	
	
	==> storage-provisioner [3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7] <==
	I1013 23:13:14.591299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:13:44.608612       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6509a2c244fb78a077b629a507425ed00e44a5cf154bde09ba3a82adad1c173] <==
	I1013 23:13:45.023264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:13:45.037482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:13:45.037528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 23:14:02.441543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:14:02.441732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad!
	I1013 23:14:02.442233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90115e5c-bd97-4767-8033-5c05d9173e3c", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad became leader
	I1013 23:14:02.542488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670275 -n old-k8s-version-670275: exit status 2 (386.974867ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
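minikube status folds each component's health into its exit code, so a non-zero exit alongside readable output (as above) usually means some component is stopped rather than that the command itself failed; that is why the harness notes exit status 2 "may be ok". A sketch of checking several components in one call with the same --format Go template, assuming the Kubelet and Kubeconfig field names from the default status output (only Host and APIServer are confirmed by this report):

	out/minikube-linux-arm64 status -p old-k8s-version-670275 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'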
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
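The field selector asks the API server for pods in any phase other than Running; no stdout block follows, which suggests the query returned nothing. A hypothetical equivalent that prints one resource name per line, convenient for scripting the same check:

	kubectl --context old-k8s-version-670275 get pods -A --field-selector=status.phase!=Running -o name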
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670275
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670275:

-- stdout --
	[
	    {
	        "Id": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	        "Created": "2025-10-13T23:11:32.172538967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 610615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:13:00.826117895Z",
	            "FinishedAt": "2025-10-13T23:12:59.730973821Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/hosts",
	        "LogPath": "/var/lib/docker/containers/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d-json.log",
	        "Name": "/old-k8s-version-670275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670275:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d",
	                "LowerDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcb3405617e19c8d0aaa42b3c032f1114272e8844b1662e6d585793772ed4acc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670275",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cafceab1c111e9de613e835c2090b626f3905f27d361492317eac927aa7e1bcb",
	            "SandboxKey": "/var/run/docker/netns/cafceab1c111",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:69:d5:63:dd:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6af44596a4f08c38fd2f582b08ef6f7af936522e458a8a952d1d21c07e6e39f9",
	                    "EndpointID": "345a24c85830f6bb3f226768986c34a02adfbd70812a642ee627fcc9fa49bb31",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670275",
	                        "d5a910fa7ea2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
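The inspect output shows the kic container itself healthy (State.Status "running", Paused false, ExitCode 0), so the pause failure appears to sit inside the node rather than at the Docker layer. A minimal sketch for pulling just those fields with docker's own Go-template formatter:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' old-k8s-version-670275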
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275: exit status 2 (368.76532ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670275 logs -n 25: (1.345885825s)
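The -n 25 flag here appears to limit the dump to roughly the last 25 lines of each component's log, which is why every section below is short. When triaging locally, a fuller capture can be written to a file instead, assuming the --file flag from minikube logs --help:

	out/minikube-linux-arm64 -p old-k8s-version-670275 logs --file=old-k8s-version-670275.log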
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-557095 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo containerd config dump                                                                                                                                                                                                  │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ ssh     │ -p cilium-557095 sudo crio config                                                                                                                                                                                                             │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ delete  │ -p cilium-557095                                                                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p kubernetes-upgrade-211312                                                                                                                                                                                                                  │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p force-systemd-env-255188                                                                                                                                                                                                                   │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ cert-options-051941 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:13:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:13:00.557380  610490 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:13:00.557498  610490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:13:00.557509  610490 out.go:374] Setting ErrFile to fd 2...
	I1013 23:13:00.557514  610490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:13:00.557774  610490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:13:00.558155  610490 out.go:368] Setting JSON to false
	I1013 23:13:00.559177  610490 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10517,"bootTime":1760386664,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:13:00.559259  610490 start.go:141] virtualization:  
	I1013 23:13:00.562572  610490 out.go:179] * [old-k8s-version-670275] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:13:00.566464  610490 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:13:00.566538  610490 notify.go:220] Checking for updates...
	I1013 23:13:00.571964  610490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:13:00.575499  610490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:00.578118  610490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:13:00.580729  610490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:13:00.583352  610490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:13:00.586551  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:00.590044  610490 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1013 23:13:00.592821  610490 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:13:00.614227  610490 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:13:00.614359  610490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:13:00.672688  610490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:13:00.662694266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:13:00.672797  610490 docker.go:318] overlay module found
	I1013 23:13:00.676277  610490 out.go:179] * Using the docker driver based on existing profile
	I1013 23:13:00.679101  610490 start.go:305] selected driver: docker
	I1013 23:13:00.679125  610490 start.go:925] validating driver "docker" against &{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:00.679249  610490 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:13:00.680005  610490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:13:00.736640  610490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:13:00.726582788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:13:00.736981  610490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:13:00.737018  610490 cni.go:84] Creating CNI manager for ""
	I1013 23:13:00.737082  610490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:13:00.737129  610490 start.go:349] cluster config:
	{Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:00.740488  610490 out.go:179] * Starting "old-k8s-version-670275" primary control-plane node in "old-k8s-version-670275" cluster
	I1013 23:13:00.743243  610490 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:13:00.746094  610490 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:13:00.748977  610490 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:13:00.749041  610490 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 23:13:00.749057  610490 cache.go:58] Caching tarball of preloaded images
	I1013 23:13:00.749069  610490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:13:00.749159  610490 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:13:00.749169  610490 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1013 23:13:00.749286  610490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:13:00.768739  610490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:13:00.768764  610490 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:13:00.768783  610490 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:13:00.768814  610490 start.go:360] acquireMachinesLock for old-k8s-version-670275: {Name:mk06171e4a123ca0a835c4c644ea27e36804aedc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:13:00.768887  610490 start.go:364] duration metric: took 48.901µs to acquireMachinesLock for "old-k8s-version-670275"
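
The acquireMachinesLock entries above show machine operations serialized behind a named lock whose retry parameters appear inline in the log ({... Delay:500ms Timeout:10m0s ...}). A minimal Go sketch of that acquire-with-retry shape; tryLock and its file-lock backing are assumptions for illustration, not minikube's actual lock implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errHeld = errors.New("lock already held")

    // tryLock stands in for a single filesystem-backed lock attempt
    // (e.g. O_CREATE|O_EXCL on a lock file keyed by machine name).
    func tryLock(name string) error {
        return nil // always succeeds in this sketch
    }

    // acquireLock retries tryLock every delay until timeout elapses,
    // mirroring the Delay/Timeout parameters in the log line above.
    func acquireLock(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := tryLock(name)
            if err == nil {
                return nil
            }
            if !errors.Is(err, errHeld) {
                return err
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring lock %q after %s", name, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        if err := acquireLock("old-k8s-version-670275", 500*time.Millisecond, 10*time.Minute); err != nil {
            panic(err)
        }
        fmt.Printf("took %s to acquire lock\n", time.Since(start))
    }
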
	I1013 23:13:00.768913  610490 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:13:00.768931  610490 fix.go:54] fixHost starting: 
	I1013 23:13:00.769209  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:00.789422  610490 fix.go:112] recreateIfNeeded on old-k8s-version-670275: state=Stopped err=<nil>
	W1013 23:13:00.789451  610490 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:13:00.792685  610490 out.go:252] * Restarting existing docker container for "old-k8s-version-670275" ...
	I1013 23:13:00.792790  610490 cli_runner.go:164] Run: docker start old-k8s-version-670275
	I1013 23:13:01.056912  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:01.081687  610490 kic.go:430] container "old-k8s-version-670275" state is running.
	I1013 23:13:01.082106  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:01.104306  610490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/config.json ...
	I1013 23:13:01.104549  610490 machine.go:93] provisionDockerMachine start ...
	I1013 23:13:01.104639  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:01.129613  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:01.130201  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:01.130218  610490 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:13:01.130996  610490 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 23:13:04.278580  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:13:04.278608  610490 ubuntu.go:182] provisioning hostname "old-k8s-version-670275"
	I1013 23:13:04.278687  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:04.295968  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:04.296282  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:04.296294  610490 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-670275 && echo "old-k8s-version-670275" | sudo tee /etc/hostname
	I1013 23:13:04.452295  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670275
	
	I1013 23:13:04.452376  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:04.471065  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:04.471452  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:04.471480  610490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-670275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-670275/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-670275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:13:04.619375  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
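
The SSH script above makes the new hostname resolve locally: replace an existing 127.0.1.1 line in /etc/hosts if one is present, otherwise append one, and skip both if the name already resolves. An in-memory Go sketch of the same idempotent edit (the already-present guard is omitted for brevity; the real edit happens remotely via sed and tee):

    package main

    import (
        "fmt"
        "regexp"
    )

    // ensureHostsEntry is the in-memory equivalent of the grep/sed/tee
    // script above: rewrite an existing 127.0.1.1 line, or append one.
    func ensureHostsEntry(hosts, name string) string {
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + name
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, entry)
        }
        return hosts + entry + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 stale-name\n"
        fmt.Print(ensureHostsEntry(hosts, "old-k8s-version-670275"))
    }
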
	I1013 23:13:04.619405  610490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:13:04.619442  610490 ubuntu.go:190] setting up certificates
	I1013 23:13:04.619451  610490 provision.go:84] configureAuth start
	I1013 23:13:04.619513  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:04.636148  610490 provision.go:143] copyHostCerts
	I1013 23:13:04.636218  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:13:04.636241  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:13:04.636322  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:13:04.636429  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:13:04.636441  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:13:04.636473  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:13:04.636542  610490 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:13:04.636553  610490 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:13:04.636578  610490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:13:04.636648  610490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-670275 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-670275]
	I1013 23:13:05.043986  610490 provision.go:177] copyRemoteCerts
	I1013 23:13:05.044066  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:13:05.044106  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.062418  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.171376  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:13:05.191362  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1013 23:13:05.210080  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:13:05.229121  610490 provision.go:87] duration metric: took 609.652895ms to configureAuth
	I1013 23:13:05.229192  610490 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:13:05.229409  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:05.229544  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.247439  610490 main.go:141] libmachine: Using SSH client type: native
	I1013 23:13:05.247762  610490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1013 23:13:05.247785  610490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:13:05.563027  610490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:13:05.563121  610490 machine.go:96] duration metric: took 4.458561378s to provisionDockerMachine
	I1013 23:13:05.563148  610490 start.go:293] postStartSetup for "old-k8s-version-670275" (driver="docker")
	I1013 23:13:05.563176  610490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:13:05.563279  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:13:05.563347  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.583876  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.688701  610490 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:13:05.693030  610490 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:13:05.693061  610490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:13:05.693072  610490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:13:05.693127  610490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:13:05.693211  610490 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:13:05.693322  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:13:05.700957  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:13:05.718655  610490 start.go:296] duration metric: took 155.47451ms for postStartSetup
	I1013 23:13:05.718757  610490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:13:05.718805  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.735295  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.840308  610490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
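
The two df probes above read how full /var is (a percentage) and how many gigabytes remain. The same figures can be read directly from statfs(2); a Linux-only sketch, with field types following syscall.Statfs_t on linux/arm64:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize) // total bytes on the filesystem
        free := st.Bavail * uint64(st.Bsize)  // bytes available to non-root
        usedPct := 100 * (total - free) / total
        fmt.Printf("/var: %d%% used, %dG available\n", usedPct, free>>30)
    }
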
	I1013 23:13:05.845298  610490 fix.go:56] duration metric: took 5.076368294s for fixHost
	I1013 23:13:05.845321  610490 start.go:83] releasing machines lock for "old-k8s-version-670275", held for 5.076419887s
	I1013 23:13:05.845415  610490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670275
	I1013 23:13:05.863326  610490 ssh_runner.go:195] Run: cat /version.json
	I1013 23:13:05.863377  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.863411  610490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:13:05.863471  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:05.882029  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:05.884849  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:06.091138  610490 ssh_runner.go:195] Run: systemctl --version
	I1013 23:13:06.097748  610490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:13:06.135848  610490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:13:06.140223  610490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:13:06.140307  610490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:13:06.148398  610490 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:13:06.148426  610490 start.go:495] detecting cgroup driver to use...
	I1013 23:13:06.148459  610490 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:13:06.148507  610490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:13:06.164036  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:13:06.176847  610490 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:13:06.176905  610490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:13:06.192803  610490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:13:06.206515  610490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:13:06.325886  610490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:13:06.447973  610490 docker.go:234] disabling docker service ...
	I1013 23:13:06.448036  610490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:13:06.463061  610490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:13:06.476407  610490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:13:06.589594  610490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:13:06.713882  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:13:06.729193  610490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:13:06.745638  610490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1013 23:13:06.745753  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.754867  610490 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:13:06.754968  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.764322  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.773138  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.783683  610490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:13:06.792022  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.801543  610490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:13:06.810776  610490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
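
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is forced to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A simplified in-memory sketch of those rewrites; the starting contents are illustrative, the delete-then-reinsert dance for conmon_cgroup is collapsed into a single replace, and the sysctl injection is omitted:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative starting contents of 02-crio.conf.
        conf := `pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        rules := []struct{ re, repl string }{
            {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
            {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
            {`(?m)^.*conmon_cgroup = .*$`, `conmon_cgroup = "pod"`},
        }
        for _, r := range rules {
            conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
        }
        fmt.Print(conf)
    }
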
	I1013 23:13:06.820571  610490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:13:06.828467  610490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:13:06.844394  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:06.955876  610490 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:13:07.100453  610490 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:13:07.100611  610490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:13:07.104393  610490 start.go:563] Will wait 60s for crictl version
	I1013 23:13:07.104496  610490 ssh_runner.go:195] Run: which crictl
	I1013 23:13:07.108408  610490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:13:07.134838  610490 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:13:07.134985  610490 ssh_runner.go:195] Run: crio --version
	I1013 23:13:07.163910  610490 ssh_runner.go:195] Run: crio --version
	I1013 23:13:07.198798  610490 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1013 23:13:07.201683  610490 cli_runner.go:164] Run: docker network inspect old-k8s-version-670275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:13:07.217891  610490 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:13:07.221861  610490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:13:07.232091  610490 kubeadm.go:883] updating cluster {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:13:07.232219  610490 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 23:13:07.232276  610490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:13:07.265579  610490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:13:07.265604  610490 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:13:07.265684  610490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:13:07.290509  610490 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:13:07.290541  610490 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:13:07.290550  610490 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1013 23:13:07.290643  610490 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-670275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:13:07.290736  610490 ssh_runner.go:195] Run: crio config
	I1013 23:13:07.358737  610490 cni.go:84] Creating CNI manager for ""
	I1013 23:13:07.358767  610490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:13:07.358794  610490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:13:07.358822  610490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-670275 NodeName:old-k8s-version-670275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:13:07.358959  610490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-670275"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:13:07.359033  610490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1013 23:13:07.366873  610490 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:13:07.366965  610490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:13:07.374609  610490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1013 23:13:07.387277  610490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:13:07.402725  610490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1013 23:13:07.415724  610490 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:13:07.419702  610490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:13:07.430318  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:07.544412  610490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:13:07.566316  610490 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275 for IP: 192.168.85.2
	I1013 23:13:07.566339  610490 certs.go:195] generating shared ca certs ...
	I1013 23:13:07.566358  610490 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:07.566504  610490 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:13:07.566559  610490 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:13:07.566572  610490 certs.go:257] generating profile certs ...
	I1013 23:13:07.566658  610490 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.key
	I1013 23:13:07.566730  610490 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key.d7f6a84a
	I1013 23:13:07.566774  610490 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key
	I1013 23:13:07.566894  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:13:07.566929  610490 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:13:07.566945  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:13:07.566970  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:13:07.566996  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:13:07.567030  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:13:07.567138  610490 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:13:07.567770  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:13:07.588532  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:13:07.605581  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:13:07.622796  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:13:07.640661  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 23:13:07.661434  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:13:07.693458  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:13:07.715784  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:13:07.742428  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:13:07.768613  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:13:07.795848  610490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:13:07.817497  610490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:13:07.840626  610490 ssh_runner.go:195] Run: openssl version
	I1013 23:13:07.847190  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:13:07.856903  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.860865  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.860959  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:13:07.905027  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:13:07.913604  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:13:07.922820  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.926924  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.927043  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:13:07.968385  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:13:07.976997  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:13:07.986208  610490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:13:07.989986  610490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:13:07.990061  610490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:13:08.032849  610490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
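
Each CA certificate copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, b5213941.0, 3ec20f2e.0 above); that hash-named symlink is how OpenSSL-based clients discover trust anchors. A sketch of the hash-and-symlink step; paths are illustrative and the real commands run over SSH as root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA asks openssl for the subject hash of a CA cert, then links
    // <hash>.0 in certsDir to it, mirroring the commands logged above:
    //   test -L <link> || ln -fs <certPath> <link>
    func installCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        if _, err := os.Lstat(link); err == nil {
            return nil // link already exists
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
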
	I1013 23:13:08.041715  610490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:13:08.045833  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:13:08.088162  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:13:08.129905  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:13:08.171259  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:13:08.217154  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:13:08.288273  610490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
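
Each of the -checkend 86400 invocations above asks OpenSSL whether the certificate expires within the next 24 hours; a non-zero exit would force certificate regeneration. A pure-Go sketch of the same check using crypto/x509 (the path in main is one of the certs from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd returns an error if the certificate at path expires
    // within the given window, like `openssl x509 -checkend`.
    func checkEnd(path string, window time.Duration) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if time.Now().Add(window).After(cert.NotAfter) {
            return fmt.Errorf("%s expires at %s, within %s", path, cert.NotAfter, window)
        }
        return nil
    }

    func main() {
        err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
        fmt.Println(err)
    }
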
	I1013 23:13:08.344471  610490 kubeadm.go:400] StartCluster: {Name:old-k8s-version-670275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670275 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:13:08.344613  610490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:13:08.344719  610490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:13:08.414375  610490 cri.go:89] found id: "d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d"
	I1013 23:13:08.414442  610490 cri.go:89] found id: "f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4"
	I1013 23:13:08.414461  610490 cri.go:89] found id: ""
	I1013 23:13:08.414562  610490 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:13:08.454696  610490 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:13:08Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:13:08.454833  610490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:13:08.473688  610490 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:13:08.473757  610490 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:13:08.473842  610490 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:13:08.485471  610490 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:13:08.486126  610490 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-670275" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:08.486450  610490 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-670275" cluster setting kubeconfig missing "old-k8s-version-670275" context setting]
	I1013 23:13:08.486941  610490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:08.490006  610490 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:13:08.507777  610490 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:13:08.507858  610490 kubeadm.go:601] duration metric: took 34.076608ms to restartPrimaryControlPlane
	I1013 23:13:08.507883  610490 kubeadm.go:402] duration metric: took 163.423116ms to StartCluster
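
The diff -u between the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new is what decides "does not require reconfiguration": exit status 0 means the files are identical, 1 means the control plane must be reconfigured. A sketch of that decision (error handling trimmed; the real command runs over SSH with sudo):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // needsReconfig returns true when the existing and newly rendered
    // kubeadm configs differ, following diff's exit-status convention.
    func needsReconfig(oldPath, newPath string) (bool, error) {
        err := exec.Command("diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil // identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil // files differ
        }
        return false, err // diff itself failed
    }

    func main() {
        diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(diff, err)
    }
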
	I1013 23:13:08.507928  610490 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:08.508013  610490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:13:08.508934  610490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:13:08.509248  610490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:13:08.509609  610490 config.go:182] Loaded profile config "old-k8s-version-670275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 23:13:08.509685  610490 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:13:08.509943  610490 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-670275"
	I1013 23:13:08.509993  610490 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-670275"
	W1013 23:13:08.510029  610490 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:13:08.510118  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.510693  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.510907  610490 addons.go:69] Setting dashboard=true in profile "old-k8s-version-670275"
	I1013 23:13:08.510948  610490 addons.go:238] Setting addon dashboard=true in "old-k8s-version-670275"
	W1013 23:13:08.510969  610490 addons.go:247] addon dashboard should already be in state true
	I1013 23:13:08.511022  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.511332  610490 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-670275"
	I1013 23:13:08.511361  610490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-670275"
	I1013 23:13:08.511556  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.511676  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.515154  610490 out.go:179] * Verifying Kubernetes components...
	I1013 23:13:08.521415  610490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:13:08.548895  610490 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:13:08.551934  610490 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:13:08.557070  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:13:08.557099  610490 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:13:08.557201  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.579980  610490 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-670275"
	W1013 23:13:08.580002  610490 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:13:08.580027  610490 host.go:66] Checking if "old-k8s-version-670275" exists ...
	I1013 23:13:08.580430  610490 cli_runner.go:164] Run: docker container inspect old-k8s-version-670275 --format={{.State.Status}}
	I1013 23:13:08.584639  610490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:13:08.590121  610490 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:13:08.590166  610490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:13:08.590246  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.607439  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:08.641051  610490 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:13:08.641069  610490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:13:08.641131  610490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670275
	I1013 23:13:08.643808  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:08.683635  610490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/old-k8s-version-670275/id_rsa Username:docker}
	I1013 23:13:08.853731  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:13:08.853807  610490 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:13:08.879677  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:13:08.879757  610490 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:13:08.895738  610490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:13:08.919465  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:13:08.919539  610490 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:13:08.941521  610490 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-670275" to be "Ready" ...
	I1013 23:13:08.964838  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:13:08.964857  610490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:13:08.984698  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:13:09.020279  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:13:09.029899  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:13:09.029974  610490 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:13:09.088668  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:13:09.088737  610490 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:13:09.122597  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:13:09.122664  610490 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:13:09.216046  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:13:09.216116  610490 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:13:09.299971  610490 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:13:09.300049  610490 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:13:09.321818  610490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:13:13.179872  610490 node_ready.go:49] node "old-k8s-version-670275" is "Ready"
	I1013 23:13:13.179904  610490 node_ready.go:38] duration metric: took 4.238291801s for node "old-k8s-version-670275" to be "Ready" ...
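
The node readiness wait above polls until the node reports Ready or the 6-minute budget runs out. A generic sketch of that poll loop; the ready callback here is a placeholder for the real check of the Node's Ready condition via the Kubernetes API:

    package main

    import (
        "fmt"
        "time"
    )

    // waitReady polls ready() every interval until it returns true,
    // errors out, or timeout elapses.
    func waitReady(name string, timeout, interval time.Duration, ready func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := ready()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("node %q not Ready within %s", name, timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        attempts := 0
        _ = waitReady("old-k8s-version-670275", 6*time.Minute, 2*time.Second, func() (bool, error) {
            attempts++
            return attempts >= 3, nil // stand-in for an API check
        })
        fmt.Printf("took %s for node to be Ready\n", time.Since(start))
    }
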
	I1013 23:13:13.179919  610490 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:13:13.179981  610490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:13:14.345636  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.3609062s)
	I1013 23:13:14.828340  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.807966615s)
	I1013 23:13:15.524191  610490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.202289934s)
	I1013 23:13:15.524233  610490 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.344229976s)
	I1013 23:13:15.524258  610490 api_server.go:72] duration metric: took 7.014956088s to wait for apiserver process to appear ...
	I1013 23:13:15.524323  610490 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:13:15.524341  610490 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:13:15.527353  610490 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-670275 addons enable metrics-server
	
	I1013 23:13:15.530427  610490 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:13:15.533457  610490 addons.go:514] duration metric: took 7.023759829s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:13:15.534271  610490 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:13:15.535913  610490 api_server.go:141] control plane version: v1.28.0
	I1013 23:13:15.535942  610490 api_server.go:131] duration metric: took 11.612135ms to wait for apiserver health ...
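
The healthz probe above is a plain HTTPS GET against the apiserver that succeeds on a 200 response with body "ok". A minimal sketch of that probe; the InsecureSkipVerify transport is an assumption for brevity, whereas the real client authenticates against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthz GETs the apiserver health endpoint and reports non-200
    // responses as errors.
    func healthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("%s returned 200:\n%s\n", url, body)
        return nil
    }

    func main() {
        _ = healthz("https://192.168.85.2:8443/healthz")
    }
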
	I1013 23:13:15.535951  610490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:13:15.539639  610490 system_pods.go:59] 8 kube-system pods found
	I1013 23:13:15.539678  610490 system_pods.go:61] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:13:15.539688  610490 system_pods.go:61] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:13:15.539694  610490 system_pods.go:61] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:13:15.539701  610490 system_pods.go:61] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:13:15.539709  610490 system_pods.go:61] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:13:15.539715  610490 system_pods.go:61] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:13:15.539732  610490 system_pods.go:61] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:13:15.539744  610490 system_pods.go:61] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Running
	I1013 23:13:15.539750  610490 system_pods.go:74] duration metric: took 3.794392ms to wait for pod list to return data ...
	I1013 23:13:15.539758  610490 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:13:15.542025  610490 default_sa.go:45] found service account: "default"
	I1013 23:13:15.542049  610490 default_sa.go:55] duration metric: took 2.276868ms for default service account to be created ...
	I1013 23:13:15.542058  610490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:13:15.545464  610490 system_pods.go:86] 8 kube-system pods found
	I1013 23:13:15.545510  610490 system_pods.go:89] "coredns-5dd5756b68-9jcbh" [d7fa11f6-6bdd-48d6-b326-81f138997784] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:13:15.545520  610490 system_pods.go:89] "etcd-old-k8s-version-670275" [44443f75-b1be-438c-aee5-66c09080a824] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:13:15.545527  610490 system_pods.go:89] "kindnet-c6xtc" [21a63e23-ce36-4981-bad3-f1386b824908] Running
	I1013 23:13:15.545535  610490 system_pods.go:89] "kube-apiserver-old-k8s-version-670275" [aa8be998-0f28-48dd-b963-816385645b33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:13:15.545545  610490 system_pods.go:89] "kube-controller-manager-old-k8s-version-670275" [2e068ad0-7a95-459c-af4e-15d9bf83c071] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:13:15.545553  610490 system_pods.go:89] "kube-proxy-2ph29" [95e536a5-7221-4e6f-9c1f-64f77071018a] Running
	I1013 23:13:15.545559  610490 system_pods.go:89] "kube-scheduler-old-k8s-version-670275" [d2014170-66a8-448a-b8ba-0cb3bb12c612] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:13:15.545566  610490 system_pods.go:89] "storage-provisioner" [bf19903c-00c6-4ccc-b9fd-6b0a36356658] Running
	I1013 23:13:15.545574  610490 system_pods.go:126] duration metric: took 3.510089ms to wait for k8s-apps to be running ...
	I1013 23:13:15.545584  610490 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:13:15.545657  610490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:13:15.559529  610490 system_svc.go:56] duration metric: took 13.935033ms WaitForService to wait for kubelet
	I1013 23:13:15.559561  610490 kubeadm.go:586] duration metric: took 7.050257065s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:13:15.559586  610490 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:13:15.562913  610490 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:13:15.562952  610490 node_conditions.go:123] node cpu capacity is 2
	I1013 23:13:15.562965  610490 node_conditions.go:105] duration metric: took 3.372287ms to run NodePressure ...
	I1013 23:13:15.562976  610490 start.go:241] waiting for startup goroutines ...
	I1013 23:13:15.562984  610490 start.go:246] waiting for cluster config update ...
	I1013 23:13:15.562995  610490 start.go:255] writing updated cluster config ...
	I1013 23:13:15.563339  610490 ssh_runner.go:195] Run: rm -f paused
	I1013 23:13:15.567408  610490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:13:15.572084  610490 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:13:17.579711  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:20.078394  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:22.078552  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:24.579559  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:27.077751  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:29.077987  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:31.079794  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:33.580013  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:36.078718  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:38.078921  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:40.079015  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:42.100155  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:44.588639  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:47.079065  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	W1013 23:13:49.578271  610490 pod_ready.go:104] pod "coredns-5dd5756b68-9jcbh" is not "Ready", error: <nil>
	I1013 23:13:50.578463  610490 pod_ready.go:94] pod "coredns-5dd5756b68-9jcbh" is "Ready"
	I1013 23:13:50.578495  610490 pod_ready.go:86] duration metric: took 35.006383806s for pod "coredns-5dd5756b68-9jcbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.582580  610490 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.590536  610490 pod_ready.go:94] pod "etcd-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.590567  610490 pod_ready.go:86] duration metric: took 7.954823ms for pod "etcd-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.593784  610490 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.602137  610490 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.602166  610490 pod_ready.go:86] duration metric: took 8.358131ms for pod "kube-apiserver-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.605845  610490 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.776126  610490 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-670275" is "Ready"
	I1013 23:13:50.776158  610490 pod_ready.go:86] duration metric: took 170.284777ms for pod "kube-controller-manager-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:50.977140  610490 pod_ready.go:83] waiting for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.375267  610490 pod_ready.go:94] pod "kube-proxy-2ph29" is "Ready"
	I1013 23:13:51.375313  610490 pod_ready.go:86] duration metric: took 398.14861ms for pod "kube-proxy-2ph29" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.576183  610490 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.976923  610490 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-670275" is "Ready"
	I1013 23:13:51.976949  610490 pod_ready.go:86] duration metric: took 400.741919ms for pod "kube-scheduler-old-k8s-version-670275" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:13:51.976971  610490 pod_ready.go:40] duration metric: took 36.409529307s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:13:52.055472  610490 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 23:13:52.062183  610490 out.go:203] 
	W1013 23:13:52.067059  610490 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 23:13:52.071713  610490 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 23:13:52.076040  610490 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-670275" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.928342585Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=cb9aeaee-b4cc-436a-bf2a-b9d85ff17653 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.930190055Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7e856331-2002-48b2-b6a7-300274a35550 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.932764296Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard" id=5b36e378-d39d-4f5c-96d2-7050a0c0df41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.933837291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939255921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939508543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f0184549755fc5de37a88d6e76383f5797b677a90aaf5b4a768c439a881f6030/merged/etc/group: no such file or directory"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.939929679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.957909504Z" level=info msg="Created container 2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard" id=5b36e378-d39d-4f5c-96d2-7050a0c0df41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.96207257Z" level=info msg="Starting container: 2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1" id=d50e996d-c3d4-43fa-a16b-2178cb5d3b8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:13:53 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:53.96989179Z" level=info msg="Started container" PID=1658 containerID=2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp/kubernetes-dashboard id=d50e996d-c3d4-43fa-a16b-2178cb5d3b8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.64300839Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.65068811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.650723227Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.650746275Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.65479562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.654959087Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.654992408Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658196363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658228994Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.658254979Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661693637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661757545Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.661780125Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.667997253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:13:54 old-k8s-version-670275 crio[651]: time="2025-10-13T23:13:54.668593557Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2d3d6a750dbd2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   18 seconds ago       Running             kubernetes-dashboard        0                   2781dfc1213bb       kubernetes-dashboard-8694d4445c-gg5tp            kubernetes-dashboard
	8f768fe52ac67       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   3df08d3a008fa       dashboard-metrics-scraper-5f989dc9cf-9ldqs       kubernetes-dashboard
	e6509a2c244fb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   e33679326303b       storage-provisioner                              kube-system
	7a67a57eec433       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   6aaa38987d805       coredns-5dd5756b68-9jcbh                         kube-system
	dc720d599612b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   5d48ffc5adac1       busybox                                          default
	c7cc067a35004       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   cfb91dd263d16       kube-proxy-2ph29                                 kube-system
	5a7cd159eef62       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   c64dd1920f904       kindnet-c6xtc                                    kube-system
	3deee803917a6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   e33679326303b       storage-provisioner                              kube-system
	d6122cdac2105       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   1997e020ecd4b       kube-apiserver-old-k8s-version-670275            kube-system
	fd8abca92b65e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   13a0f75c0dd41       etcd-old-k8s-version-670275                      kube-system
	f553bdbd313ae       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   6fccb6fc01afc       kube-controller-manager-old-k8s-version-670275   kube-system
	11f465f0a5f2a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   0566d55a5dd60       kube-scheduler-old-k8s-version-670275            kube-system
	
	
	==> coredns [7a67a57eec433712b1a70f2b083b16db62ef0096a63d6df917fb42b8c3e00b88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59411 - 61115 "HINFO IN 5199209719392204441.1153077157160991146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029488494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=old-k8s-version-670275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:14:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:13:43 +0000   Mon, 13 Oct 2025 23:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-670275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                3228e95e-3de7-463c-a3f6-be9dbc04be1a
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-9jcbh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m1s
	  kube-system                 etcd-old-k8s-version-670275                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m13s
	  kube-system                 kindnet-c6xtc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m1s
	  kube-system                 kube-apiserver-old-k8s-version-670275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-old-k8s-version-670275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-2ph29                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-old-k8s-version-670275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9ldqs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gg5tp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 119s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m14s              kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s              kubelet          Node old-k8s-version-670275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s              kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m14s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m1s               node-controller  Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-670275 status is now: NodeReady
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node old-k8s-version-670275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-670275 event: Registered Node old-k8s-version-670275 in Controller
	
	
	==> dmesg <==
	[Oct13 22:45] overlayfs: idmapped layers are currently not supported
	[Oct13 22:50] overlayfs: idmapped layers are currently not supported
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fd8abca92b65e2224720afca413ecd65f3d828117b27b543bbf324c4b469d469] <==
	{"level":"info","ts":"2025-10-13T23:13:09.027564Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:13:09.027572Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T23:13:09.027763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-13T23:13:09.027819Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T23:13:09.027885Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:13:09.027911Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T23:13:09.052751Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T23:13:09.053143Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T23:13:09.053168Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T23:13:09.053385Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:13:09.053396Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T23:13:10.62057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T23:13:10.620861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.620965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T23:13:10.623322Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-670275 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T23:13:10.623506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:13:10.62464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T23:13:10.627475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T23:13:10.627701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T23:13:10.628767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T23:13:10.633696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:14:12 up  2:56,  0 user,  load average: 1.27, 2.53, 2.32
	Linux old-k8s-version-670275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a7cd159eef62e90e54723908fed3e0842527fca41323ee49c8f86d31c4ae5cb] <==
	I1013 23:13:14.486333       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:13:14.486552       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:13:14.486669       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:13:14.486680       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:13:14.486692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:13:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:13:14.635328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:13:14.635352       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:13:14.635361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:13:14.635805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:13:44.634825       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:13:44.635781       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:13:44.635859       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:13:44.637034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 23:13:46.136436       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:13:46.136467       1 metrics.go:72] Registering metrics
	I1013 23:13:46.136545       1 controller.go:711] "Syncing nftables rules"
	I1013 23:13:54.642714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:13:54.642775       1 main.go:301] handling current node
	I1013 23:14:04.638337       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:14:04.638368       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6122cdac210528ee69d58eb80b1b66cd55cd8c1862a144d6114d13c9cb9392d] <==
	I1013 23:13:13.166572       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1013 23:13:13.167018       1 aggregator.go:166] initial CRD sync complete...
	I1013 23:13:13.167038       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 23:13:13.167046       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:13:13.218877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:13:13.257160       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1013 23:13:13.258251       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:13:13.270433       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:13:13.282372       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1013 23:13:13.283181       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1013 23:13:13.283236       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 23:13:13.283634       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1013 23:13:13.283650       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1013 23:13:13.329193       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:13:14.025780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:13:15.349743       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 23:13:15.396386       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 23:13:15.422897       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:13:15.433101       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:13:15.445242       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 23:13:15.496966       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.117.95"}
	I1013 23:13:15.517069       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.143.180"}
	I1013 23:13:25.997324       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1013 23:13:26.083890       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 23:13:26.110840       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f553bdbd313ae656a6206a93c14d248f53c01d7428f445c2ab944a92ca6dd4f4] <==
	I1013 23:13:25.792511       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 23:13:26.002059       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 23:13:26.009961       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 23:13:26.034748       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	I1013 23:13:26.034877       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gg5tp"
	I1013 23:13:26.047211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.294541ms"
	I1013 23:13:26.065487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.715993ms"
	I1013 23:13:26.089861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.538556ms"
	I1013 23:13:26.090052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.386µs"
	I1013 23:13:26.121731       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1013 23:13:26.130112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.498557ms"
	I1013 23:13:26.130270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.021µs"
	I1013 23:13:26.130527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.521µs"
	I1013 23:13:26.134896       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:13:26.139491       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 23:13:26.139590       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 23:13:32.923373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.083µs"
	I1013 23:13:33.933143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.354µs"
	I1013 23:13:37.276334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="195.073µs"
	I1013 23:13:49.984035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.725µs"
	I1013 23:13:50.510919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.60604ms"
	I1013 23:13:50.511205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.315µs"
	I1013 23:13:55.010400       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.087092ms"
	I1013 23:13:55.011133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.106µs"
	I1013 23:13:57.279333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="252.737µs"
	
	
	==> kube-proxy [c7cc067a350042f099fb0283fd178fc3d2dfe4c66947450412f9a359cb5eb276] <==
	I1013 23:13:14.732524       1 server_others.go:69] "Using iptables proxy"
	I1013 23:13:14.771649       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 23:13:14.881140       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:13:14.883390       1 server_others.go:152] "Using iptables Proxier"
	I1013 23:13:14.883428       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 23:13:14.883436       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 23:13:14.883460       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 23:13:14.883697       1 server.go:846] "Version info" version="v1.28.0"
	I1013 23:13:14.883707       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:13:14.886092       1 config.go:188] "Starting service config controller"
	I1013 23:13:14.886123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 23:13:14.886141       1 config.go:97] "Starting endpoint slice config controller"
	I1013 23:13:14.886146       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 23:13:14.886716       1 config.go:315] "Starting node config controller"
	I1013 23:13:14.886734       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 23:13:14.986477       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1013 23:13:14.986540       1 shared_informer.go:318] Caches are synced for service config
	I1013 23:13:14.986819       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [11f465f0a5f2ad80579ad95495a8d238edc5b96f89e351cf305a5ec396507d05] <==
	I1013 23:13:11.192336       1 serving.go:348] Generated self-signed cert in-memory
	I1013 23:13:13.546556       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 23:13:13.546649       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:13:13.555325       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 23:13:13.555479       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1013 23:13:13.555518       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1013 23:13:13.555559       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 23:13:13.557192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:13:13.563307       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 23:13:13.563432       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:13:13.563478       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 23:13:13.655890       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1013 23:13:13.664538       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1013 23:13:13.664626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: E1013 23:13:26.073522     775 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-670275" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-670275' and this object
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193557     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/86728516-6908-4c5c-91e7-e39eb9a82389-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gg5tp\" (UID: \"86728516-6908-4c5c-91e7-e39eb9a82389\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193678     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg58s\" (UniqueName: \"kubernetes.io/projected/86728516-6908-4c5c-91e7-e39eb9a82389-kube-api-access-vg58s\") pod \"kubernetes-dashboard-8694d4445c-gg5tp\" (UID: \"86728516-6908-4c5c-91e7-e39eb9a82389\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193709     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/472e5777-cda5-42db-bcc5-c6cf24d06bce-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9ldqs\" (UID: \"472e5777-cda5-42db-bcc5-c6cf24d06bce\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	Oct 13 23:13:26 old-k8s-version-670275 kubelet[775]: I1013 23:13:26.193787     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gllz\" (UniqueName: \"kubernetes.io/projected/472e5777-cda5-42db-bcc5-c6cf24d06bce-kube-api-access-2gllz\") pod \"dashboard-metrics-scraper-5f989dc9cf-9ldqs\" (UID: \"472e5777-cda5-42db-bcc5-c6cf24d06bce\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs"
	Oct 13 23:13:27 old-k8s-version-670275 kubelet[775]: W1013 23:13:27.291405     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3 WatchSource:0}: Error finding container 3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3: Status 404 returned error can't find the container with id 3df08d3a008fae5dee90ac2ab92b0efd18d8efc029849ef288252d5600f7bdc3
	Oct 13 23:13:27 old-k8s-version-670275 kubelet[775]: W1013 23:13:27.297954     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5a910fa7ea225e8055406a3af3752476c5f08e587aad3acfad90a5d2dcd1f0d/crio-2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce WatchSource:0}: Error finding container 2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce: Status 404 returned error can't find the container with id 2781dfc1213bb0bb2d07dd9b1dec9e4a906ac564ea2121a65d96b924a0d1b5ce
	Oct 13 23:13:32 old-k8s-version-670275 kubelet[775]: I1013 23:13:32.905809     775 scope.go:117] "RemoveContainer" containerID="aa271ed8151db7e41be5e365bf08753541ea0964eaac4540a2661d5764c8a31d"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: I1013 23:13:33.910148     775 scope.go:117] "RemoveContainer" containerID="aa271ed8151db7e41be5e365bf08753541ea0964eaac4540a2661d5764c8a31d"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: I1013 23:13:33.910848     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:33 old-k8s-version-670275 kubelet[775]: E1013 23:13:33.911284     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:37 old-k8s-version-670275 kubelet[775]: I1013 23:13:37.262337     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:37 old-k8s-version-670275 kubelet[775]: E1013 23:13:37.262696     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:44 old-k8s-version-670275 kubelet[775]: I1013 23:13:44.940487     775 scope.go:117] "RemoveContainer" containerID="3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.772134     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.955492     775 scope.go:117] "RemoveContainer" containerID="b3d6efecc93c98b4236d434d1b173493b0f0208c47242e350dfb2ccf3548f66a"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: I1013 23:13:49.955767     775 scope.go:117] "RemoveContainer" containerID="8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	Oct 13 23:13:49 old-k8s-version-670275 kubelet[775]: E1013 23:13:49.956098     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: I1013 23:13:57.262719     775 scope.go:117] "RemoveContainer" containerID="8f768fe52ac6729d894b48a8d9e10b91b9f1cce278854b88aff76f4210f9da6d"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: E1013 23:13:57.263564     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9ldqs_kubernetes-dashboard(472e5777-cda5-42db-bcc5-c6cf24d06bce)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9ldqs" podUID="472e5777-cda5-42db-bcc5-c6cf24d06bce"
	Oct 13 23:13:57 old-k8s-version-670275 kubelet[775]: I1013 23:13:57.280040     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gg5tp" podStartSLOduration=4.654652587 podCreationTimestamp="2025-10-13 23:13:26 +0000 UTC" firstStartedPulling="2025-10-13 23:13:27.303319297 +0000 UTC m=+19.739571196" lastFinishedPulling="2025-10-13 23:13:53.928647194 +0000 UTC m=+46.364899093" observedRunningTime="2025-10-13 23:13:55.001073204 +0000 UTC m=+47.437325111" watchObservedRunningTime="2025-10-13 23:13:57.279980484 +0000 UTC m=+49.716232383"
	Oct 13 23:14:07 old-k8s-version-670275 kubelet[775]: I1013 23:14:07.458500     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:14:07 old-k8s-version-670275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2d3d6a750dbd29406ab4942e0eb47572d1a6ceb79100b58ad20c9fa44224e6e1] <==
	2025/10/13 23:13:53 Starting overwatch
	2025/10/13 23:13:53 Using namespace: kubernetes-dashboard
	2025/10/13 23:13:53 Using in-cluster config to connect to apiserver
	2025/10/13 23:13:53 Using secret token for csrf signing
	2025/10/13 23:13:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:13:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:13:54 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 23:13:54 Generating JWE encryption key
	2025/10/13 23:13:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:13:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:13:54 Initializing JWE encryption key from synchronized object
	2025/10/13 23:13:54 Creating in-cluster Sidecar client
	2025/10/13 23:13:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:13:54 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [3deee803917a6531252a677d172b7f5ab19bc3e562347cfaaaf7100fe8d271a7] <==
	I1013 23:13:14.591299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:13:44.608612       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6509a2c244fb78a077b629a507425ed00e44a5cf154bde09ba3a82adad1c173] <==
	I1013 23:13:45.023264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:13:45.037482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:13:45.037528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 23:14:02.441543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:14:02.441732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad!
	I1013 23:14:02.442233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90115e5c-bd97-4767-8033-5c05d9173e3c", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad became leader
	I1013 23:14:02.542488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670275_a80390c9-deea-4309-b8cc-63f5b7afd1ad!
	

                                                
                                                
-- /stdout --
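The storage-provisioner log above shows the standard client-go leader election: the restarted provisioner blocks on the kube-system/k8s.io-minikube-hostpath lease until the previous holder's lease expires (about 17s here, 23:13:45 to 23:14:02), then starts its controller. Below is a minimal sketch of that pattern using client-go's leaderelection package; the lease name and namespace come from the log, while the identity and timing values are illustrative assumptions, not minikube's actual configuration.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // in-cluster config, as the provisioner pod would use
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // unique holder identity, e.g. the pod name

	// The lease seen in the log: kube-system/k8s.io-minikube-hostpath.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// Timings here are illustrative; the second instance blocks in
	// "attempting to acquire leader lease" until the old lease expires.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}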
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670275 -n old-k8s-version-670275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670275 -n old-k8s-version-670275: exit status 2 (366.160623ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
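The --format={{.APIServer}} flag above is a Go text/template rendered against minikube's status struct; the lone "Running" in the stdout block is that single field. A tiny standalone sketch of the same mechanism, where the Status struct is a hypothetical stand-in for minikube's internal type (only the field name matters for the template lookup):

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's internal status struct.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	// Prints: Running
}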
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.42s)
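For reference, the kubernetes-dashboard startup sequence captured in the post-mortem above ("Using in-cluster config to connect to apiserver", then "Successful initial request to the apiserver, version: v1.28.0") is the standard client-go bootstrap. A minimal sketch of that flow, assuming client-go and a pod-mounted service account; this is illustrative, not the dashboard's actual code:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the service-account token and CA that Kubernetes mounts into the pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The "initial request to the apiserver" reporting the server version.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver version:", v.GitVersion) // e.g. v1.28.0
}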

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.611778ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:15:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
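The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which shells out to `sudo runc list -f json` inside the node; on this CRI-O node /run/runc does not exist, so the probe itself errors out before any paused container can be found. A rough sketch of that kind of probe follows; it assumes runc's JSON output is an array of container-state objects with id/status fields (per the runc CLI, not verified against minikube's actual implementation):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// containerState holds the fields we care about from `runc list -f json`.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The failure mode in the log: runc exits 1 with
		// "open /run/runc: no such file or directory".
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("paused:", ids)
}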
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-985461 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-985461 describe deploy/metrics-server -n kube-system: exit status 1 (89.876116ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-985461 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-985461
helpers_test.go:243: (dbg) docker inspect no-preload-985461:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	        "Created": "2025-10-13T23:14:18.084587368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 614361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:14:18.16105821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hostname",
	        "HostsPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hosts",
	        "LogPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad-json.log",
	        "Name": "/no-preload-985461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-985461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-985461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	                "LowerDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-985461",
	                "Source": "/var/lib/docker/volumes/no-preload-985461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-985461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-985461",
	                "name.minikube.sigs.k8s.io": "no-preload-985461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01c43f729f40a1b7fb4fa4008da9976f67792d07b0206b1edbbceeae5efa35f6",
	            "SandboxKey": "/var/run/docker/netns/01c43f729f40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-985461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:c4:c5:10:8e:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2b0b0019112f54c353afbc5f1c7d7acc1a1a4608af0cb49812ab4cf98cbb0b9",
	                    "EndpointID": "67bd5ac993965e416f687688f45994568a8f4a9781257cf6ffb9b4e060d1d5f6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-985461",
	                        "24722b872d75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
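The docker inspect output above is what the test helpers rely on to reach the node: each container port (22, 2376, 5000, 8443, 32443) is published to an ephemeral host port on 127.0.0.1. A small sketch of reading those mappings programmatically with the Docker Go SDK, assuming a local daemon reachable via the standard environment (equivalent to `docker inspect no-preload-985461`):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	insp, err := cli.ContainerInspect(context.Background(), "no-preload-985461")
	if err != nil {
		log.Fatal(err)
	}

	// NetworkSettings.Ports maps container ports (e.g. "8443/tcp") to the
	// 127.0.0.1 host ports Docker picked, as seen in the inspect JSON above.
	for port, bindings := range insp.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}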
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-985461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-985461 logs -n 25: (1.269914092s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-557095                                                                                                                                                                                                                              │ cilium-557095             │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p kubernetes-upgrade-211312                                                                                                                                                                                                                  │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p force-systemd-env-255188                                                                                                                                                                                                                   │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ cert-options-051941 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482        │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:14:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:14:49.274201  617881 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:14:49.274433  617881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:14:49.274462  617881 out.go:374] Setting ErrFile to fd 2...
	I1013 23:14:49.274482  617881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:14:49.274797  617881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:14:49.275293  617881 out.go:368] Setting JSON to false
	I1013 23:14:49.276266  617881 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10626,"bootTime":1760386664,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:14:49.276364  617881 start.go:141] virtualization:  
	I1013 23:14:49.281358  617881 out.go:179] * [embed-certs-505482] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:14:49.285917  617881 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:14:49.286005  617881 notify.go:220] Checking for updates...
	I1013 23:14:49.292610  617881 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:14:49.295973  617881 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:14:49.299273  617881 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:14:49.302226  617881 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:14:49.305286  617881 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:14:49.308922  617881 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:14:49.309099  617881 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:14:49.356277  617881 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:14:49.356423  617881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:14:49.444603  617881 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-13 23:14:49.435435703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:14:49.444714  617881 docker.go:318] overlay module found
	I1013 23:14:49.447997  617881 out.go:179] * Using the docker driver based on user configuration
	I1013 23:14:49.450864  617881 start.go:305] selected driver: docker
	I1013 23:14:49.450879  617881 start.go:925] validating driver "docker" against <nil>
	I1013 23:14:49.450892  617881 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:14:49.451728  617881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:14:49.552572  617881 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-13 23:14:49.529834653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:14:49.552736  617881 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 23:14:49.552980  617881 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:14:49.556093  617881 out.go:179] * Using Docker driver with root privileges
	I1013 23:14:49.558950  617881 cni.go:84] Creating CNI manager for ""
	I1013 23:14:49.559017  617881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:14:49.559026  617881 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 23:14:49.559184  617881 start.go:349] cluster config:
	{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:14:49.564069  617881 out.go:179] * Starting "embed-certs-505482" primary control-plane node in "embed-certs-505482" cluster
	I1013 23:14:49.566952  617881 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:14:49.569863  617881 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:14:49.572780  617881 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:14:49.572851  617881 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:14:49.572860  617881 cache.go:58] Caching tarball of preloaded images
	I1013 23:14:49.572945  617881 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:14:49.572954  617881 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:14:49.573085  617881 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:14:49.573106  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json: {Name:mke0ced70814150c1bd995eea49fe21ceb9a6212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:14:49.573279  617881 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:14:49.594650  617881 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:14:49.594672  617881 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:14:49.594694  617881 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:14:49.594716  617881 start.go:360] acquireMachinesLock for embed-certs-505482: {Name:mk60574f1c53ab31d166b72e157fd38e1fef9702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:14:49.594814  617881 start.go:364] duration metric: took 83.165µs to acquireMachinesLock for "embed-certs-505482"
	I1013 23:14:49.594838  617881 start.go:93] Provisioning new machine with config: &{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:14:49.594930  617881 start.go:125] createHost starting for "" (driver="docker")
	I1013 23:14:48.594100  614055 out.go:252]   - Generating certificates and keys ...
	I1013 23:14:48.594198  614055 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:14:48.594271  614055 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:14:48.884207  614055 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:14:49.239249  614055 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:14:50.395575  614055 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:14:51.310586  614055 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:14:49.600342  617881 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:14:49.600601  617881 start.go:159] libmachine.API.Create for "embed-certs-505482" (driver="docker")
	I1013 23:14:49.600646  617881 client.go:168] LocalClient.Create starting
	I1013 23:14:49.600714  617881 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:14:49.600758  617881 main.go:141] libmachine: Decoding PEM data...
	I1013 23:14:49.600772  617881 main.go:141] libmachine: Parsing certificate...
	I1013 23:14:49.600823  617881 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:14:49.600840  617881 main.go:141] libmachine: Decoding PEM data...
	I1013 23:14:49.600851  617881 main.go:141] libmachine: Parsing certificate...
	I1013 23:14:49.601204  617881 cli_runner.go:164] Run: docker network inspect embed-certs-505482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:14:49.618237  617881 cli_runner.go:211] docker network inspect embed-certs-505482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:14:49.618313  617881 network_create.go:284] running [docker network inspect embed-certs-505482] to gather additional debugging logs...
	I1013 23:14:49.618329  617881 cli_runner.go:164] Run: docker network inspect embed-certs-505482
	W1013 23:14:49.636647  617881 cli_runner.go:211] docker network inspect embed-certs-505482 returned with exit code 1
	I1013 23:14:49.636683  617881 network_create.go:287] error running [docker network inspect embed-certs-505482]: docker network inspect embed-certs-505482: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-505482 not found
	I1013 23:14:49.636697  617881 network_create.go:289] output of [docker network inspect embed-certs-505482]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-505482 not found
	
	** /stderr **
	I1013 23:14:49.636829  617881 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:14:49.655825  617881 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:14:49.656041  617881 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:14:49.656285  617881 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:14:49.656707  617881 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a98e0}
	I1013 23:14:49.656725  617881 network_create.go:124] attempt to create docker network embed-certs-505482 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 23:14:49.656784  617881 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-505482 embed-certs-505482
	I1013 23:14:49.726935  617881 network_create.go:108] docker network embed-certs-505482 192.168.76.0/24 created
	I1013 23:14:49.726964  617881 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-505482" container
	I1013 23:14:49.727037  617881 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:14:49.748968  617881 cli_runner.go:164] Run: docker volume create embed-certs-505482 --label name.minikube.sigs.k8s.io=embed-certs-505482 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:14:49.769407  617881 oci.go:103] Successfully created a docker volume embed-certs-505482
	I1013 23:14:49.769508  617881 cli_runner.go:164] Run: docker run --rm --name embed-certs-505482-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-505482 --entrypoint /usr/bin/test -v embed-certs-505482:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:14:50.418183  617881 oci.go:107] Successfully prepared a docker volume embed-certs-505482
	I1013 23:14:50.418234  617881 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:14:50.418254  617881 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:14:50.418330  617881 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-505482:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 23:14:52.350068  614055 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:14:52.350658  614055 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-985461] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:14:53.316343  614055 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:14:53.316938  614055 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-985461] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:14:54.557551  614055 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:14:54.945241  614055 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:14:55.412407  614055 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:14:55.412482  614055 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:14:55.963437  614055 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:14:57.223488  614055 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:14:57.991555  614055 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:14:58.299596  614055 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:14:58.796173  614055 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:14:58.796738  614055 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:14:58.799514  614055 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:14:55.349760  617881 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-505482:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.931392384s)
	I1013 23:14:55.349796  617881 kic.go:203] duration metric: took 4.931537637s to extract preloaded images to volume ...
	W1013 23:14:55.349941  617881 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:14:55.350057  617881 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:14:55.431619  617881 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-505482 --name embed-certs-505482 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-505482 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-505482 --network embed-certs-505482 --ip 192.168.76.2 --volume embed-certs-505482:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:14:55.866128  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Running}}
	I1013 23:14:55.894607  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:14:55.922566  617881 cli_runner.go:164] Run: docker exec embed-certs-505482 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:14:55.994618  617881 oci.go:144] the created container "embed-certs-505482" has a running status.
	I1013 23:14:55.994654  617881 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa...
	I1013 23:14:56.810925  617881 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:14:56.833973  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:14:56.856742  617881 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:14:56.856767  617881 kic_runner.go:114] Args: [docker exec --privileged embed-certs-505482 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:14:56.926099  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:14:56.965618  617881 machine.go:93] provisionDockerMachine start ...
	I1013 23:14:56.965723  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:14:56.994307  617881 main.go:141] libmachine: Using SSH client type: native
	I1013 23:14:56.994648  617881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1013 23:14:56.994658  617881 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:14:56.995359  617881 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48776->127.0.0.1:33459: read: connection reset by peer
	I1013 23:14:58.803028  614055 out.go:252]   - Booting up control plane ...
	I1013 23:14:58.803164  614055 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:14:58.803256  614055 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:14:58.804300  614055 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:14:58.820816  614055 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:14:58.820944  614055 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:14:58.829522  614055 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:14:58.830740  614055 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:14:58.830807  614055 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:14:58.965585  614055 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:14:58.965728  614055 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 23:14:59.966872  614055 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001322675s
	I1013 23:14:59.971310  614055 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:14:59.971410  614055 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 23:14:59.971728  614055 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:14:59.971819  614055 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 23:15:00.354602  617881 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
	
	I1013 23:15:00.354630  617881 ubuntu.go:182] provisioning hostname "embed-certs-505482"
	I1013 23:15:00.354703  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:00.404074  617881 main.go:141] libmachine: Using SSH client type: native
	I1013 23:15:00.414724  617881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1013 23:15:00.414769  617881 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-505482 && echo "embed-certs-505482" | sudo tee /etc/hostname
	I1013 23:15:00.689055  617881 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
	
	I1013 23:15:00.689145  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:00.715047  617881 main.go:141] libmachine: Using SSH client type: native
	I1013 23:15:00.715395  617881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1013 23:15:00.715421  617881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-505482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-505482/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-505482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:15:00.901917  617881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:15:00.901942  617881 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:15:00.901963  617881 ubuntu.go:190] setting up certificates
	I1013 23:15:00.901974  617881 provision.go:84] configureAuth start
	I1013 23:15:00.902062  617881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:15:00.939116  617881 provision.go:143] copyHostCerts
	I1013 23:15:00.939186  617881 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:15:00.939196  617881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:15:00.939283  617881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:15:00.939387  617881 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:15:00.939394  617881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:15:00.939421  617881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:15:00.939487  617881 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:15:00.939492  617881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:15:00.939517  617881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:15:00.939580  617881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.embed-certs-505482 san=[127.0.0.1 192.168.76.2 embed-certs-505482 localhost minikube]
	I1013 23:15:01.287972  617881 provision.go:177] copyRemoteCerts
	I1013 23:15:01.288089  617881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:15:01.288164  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:01.307173  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:01.436630  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:15:01.477045  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1013 23:15:01.509210  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:15:01.537227  617881 provision.go:87] duration metric: took 635.225703ms to configureAuth
	I1013 23:15:01.537252  617881 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:15:01.537445  617881 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:15:01.537560  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:01.572616  617881 main.go:141] libmachine: Using SSH client type: native
	I1013 23:15:01.572940  617881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1013 23:15:01.572961  617881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:15:01.986900  617881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:15:01.986986  617881 machine.go:96] duration metric: took 5.021337629s to provisionDockerMachine
	I1013 23:15:01.987012  617881 client.go:171] duration metric: took 12.386358926s to LocalClient.Create
	I1013 23:15:01.987069  617881 start.go:167] duration metric: took 12.38647034s to libmachine.API.Create "embed-certs-505482"
	I1013 23:15:01.987177  617881 start.go:293] postStartSetup for "embed-certs-505482" (driver="docker")
	I1013 23:15:01.987203  617881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:15:01.987298  617881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:15:01.987383  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:02.015983  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:02.137599  617881 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:15:02.141857  617881 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:15:02.141885  617881 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:15:02.141896  617881 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:15:02.141979  617881 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:15:02.142062  617881 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:15:02.142160  617881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:15:02.165557  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:15:02.192261  617881 start.go:296] duration metric: took 205.052443ms for postStartSetup
	I1013 23:15:02.192669  617881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:15:02.227350  617881 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:15:02.227668  617881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:15:02.227735  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:02.260808  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:02.383634  617881 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:15:02.389033  617881 start.go:128] duration metric: took 12.794086088s to createHost
	I1013 23:15:02.389059  617881 start.go:83] releasing machines lock for "embed-certs-505482", held for 12.794236936s
	I1013 23:15:02.389129  617881 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:15:02.416640  617881 ssh_runner.go:195] Run: cat /version.json
	I1013 23:15:02.416692  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:02.416953  617881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:15:02.417015  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:02.457885  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:02.460192  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:02.750458  617881 ssh_runner.go:195] Run: systemctl --version
	I1013 23:15:02.757508  617881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:15:02.820480  617881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:15:02.827593  617881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:15:02.827682  617881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:15:02.865105  617881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 23:15:02.865137  617881 start.go:495] detecting cgroup driver to use...
	I1013 23:15:02.865171  617881 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:15:02.865237  617881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:15:02.891864  617881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:15:02.916039  617881 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:15:02.916117  617881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:15:02.942535  617881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:15:02.962800  617881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:15:03.189905  617881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:15:03.408369  617881 docker.go:234] disabling docker service ...
	I1013 23:15:03.408447  617881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:15:03.454248  617881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:15:03.478300  617881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:15:03.685419  617881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:15:03.899853  617881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:15:03.921267  617881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:15:03.948071  617881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:15:03.948217  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:03.965607  617881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:15:03.965731  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:03.981704  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:03.995693  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:04.013725  617881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:15:04.033009  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:04.046819  617881 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:04.080305  617881 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:15:04.092933  617881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:15:04.104132  617881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:15:04.116479  617881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:15:04.342870  617881 ssh_runner.go:195] Run: sudo systemctl restart crio
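For reference, the cri-o setup above amounts to in-place line rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) followed by a daemon restart. A minimal Go sketch of the same line-rewriting approach; the helper name setConfKey is illustrative, not minikube's API, and assumes the simple key = value layout of the drop-in file:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfKey rewrites every `key = ...` line in a cri-o drop-in config,
	// mirroring the `sudo sed -i 's|^.*key = .*$|...|'` calls in the log above.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// The same two edits the log performs before `systemctl restart crio`.
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
	}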
	I1013 23:15:04.572104  617881 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:15:04.572226  617881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:15:04.578764  617881 start.go:563] Will wait 60s for crictl version
	I1013 23:15:04.578877  617881 ssh_runner.go:195] Run: which crictl
	I1013 23:15:04.584964  617881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:15:04.654665  617881 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:15:04.654824  617881 ssh_runner.go:195] Run: crio --version
	I1013 23:15:04.712767  617881 ssh_runner.go:195] Run: crio --version
	I1013 23:15:04.768242  617881 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:15:04.771260  617881 cli_runner.go:164] Run: docker network inspect embed-certs-505482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:15:04.793884  617881 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:15:04.798703  617881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
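The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the network gateway IP. A rough stdlib-only Go equivalent of that filter-and-append logic, assuming the same tab-separated hosts format; pinHost is a hypothetical helper, not a minikube function:

	package main

	import (
		"os"
		"strings"
	)

	// pinHost reproduces the shell pipeline in the log: filter out any line
	// ending in "\t<name>", then append the fresh IP-to-name mapping.
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = pinHost("/etc/hosts", "192.168.76.1", "host.minikube.internal")
	}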
	I1013 23:15:04.814911  617881 kubeadm.go:883] updating cluster {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:15:04.815036  617881 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:15:04.815143  617881 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:15:04.891601  617881 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:15:04.891621  617881 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:15:04.891678  617881 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:15:04.930030  617881 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:15:04.930094  617881 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:15:04.930116  617881 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:15:04.930234  617881 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-505482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
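The [Unit]/[Service] dump above is the systemd drop-in minikube scp's to /etc/kubernetes a few lines below (the 368-byte 10-kubeadm.conf). A hedged sketch of assembling that ExecStart line from a flag map; the flag values are copied from the logged unit, the builder itself is hypothetical:

	package main

	import (
		"fmt"
		"sort"
		"strings"
	)

	// kubeletExecStart renders an ExecStart line like the one in the unit
	// above; flags are sorted so the output is deterministic.
	func kubeletExecStart(binDir string, flags map[string]string) string {
		keys := make([]string, 0, len(flags))
		for k := range flags {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		parts := []string{binDir + "/kubelet"}
		for _, k := range keys {
			parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
		}
		return strings.Join(parts, " ")
	}

	func main() {
		fmt.Println(kubeletExecStart("/var/lib/minikube/binaries/v1.34.1", map[string]string{
			"bootstrap-kubeconfig":     "/etc/kubernetes/bootstrap-kubelet.conf",
			"cgroups-per-qos":          "false",
			"config":                   "/var/lib/kubelet/config.yaml",
			"enforce-node-allocatable": "",
			"hostname-override":        "embed-certs-505482",
			"kubeconfig":               "/etc/kubernetes/kubelet.conf",
			"node-ip":                  "192.168.76.2",
		}))
	}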
	I1013 23:15:04.930333  617881 ssh_runner.go:195] Run: crio config
	I1013 23:15:05.059367  617881 cni.go:84] Creating CNI manager for ""
	I1013 23:15:05.059432  617881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:15:05.059467  617881 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:15:05.059521  617881 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-505482 NodeName:embed-certs-505482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:15:05.059674  617881 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-505482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
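The generated kubeadm.yaml above is four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch that splits such a file into per-kind documents; the naive split on "---" is a simplification (it ignores "---" inside strings) but is enough for this layout:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// kindRe pulls the `kind:` field out of a single YAML document.
	var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("%s: %d bytes\n", m[1], len(doc))
			}
		}
	}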
	I1013 23:15:05.059763  617881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:15:05.072144  617881 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:15:05.072279  617881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:15:05.084133  617881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1013 23:15:05.122792  617881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:15:05.137888  617881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 23:15:05.161988  617881 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:15:05.166077  617881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:15:05.186090  617881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:15:05.379894  617881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:15:05.410641  617881 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482 for IP: 192.168.76.2
	I1013 23:15:05.410720  617881 certs.go:195] generating shared ca certs ...
	I1013 23:15:05.410754  617881 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:05.410960  617881 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:15:05.411036  617881 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:15:05.411075  617881 certs.go:257] generating profile certs ...
	I1013 23:15:05.411191  617881 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.key
	I1013 23:15:05.411242  617881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.crt with IP's: []
	I1013 23:15:06.300310  617881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.crt ...
	I1013 23:15:06.300377  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.crt: {Name:mk868f5936d2f0b3a57b6921c77f413431f479cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:06.300637  617881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.key ...
	I1013 23:15:06.300675  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.key: {Name:mka37db863f813f5d4e2e6f91966701cdf3502ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:06.300846  617881 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4
	I1013 23:15:06.300888  617881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt.049067f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 23:15:06.449128  617881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt.049067f4 ...
	I1013 23:15:06.449156  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt.049067f4: {Name:mkee2a6853a797f023e13be0a1e6f899c74b1535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:06.449340  617881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4 ...
	I1013 23:15:06.449350  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4: {Name:mk7d00c0b96f18153dae17bd63af3532b68b27f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:06.449428  617881 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt.049067f4 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt
	I1013 23:15:06.449504  617881 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key
	I1013 23:15:06.449559  617881 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key
	I1013 23:15:06.449571  617881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt with IP's: []
	I1013 23:15:07.120778  617881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt ...
	I1013 23:15:07.120850  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt: {Name:mke3aaa60467b09791658463fd92bf45374fdacb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:07.121085  617881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key ...
	I1013 23:15:07.121122  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key: {Name:mkf64b254e45d4ab5009ca9113bc35968f6b7544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:07.121413  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:15:07.121482  617881 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:15:07.121507  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:15:07.121562  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:15:07.121614  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:15:07.121671  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:15:07.121746  617881 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:15:07.122394  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:15:07.151704  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:15:07.191557  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:15:07.224451  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:15:07.277439  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 23:15:07.320860  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:15:07.367713  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:15:07.396418  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:15:07.428419  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:15:07.460103  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:15:07.495224  617881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:15:07.529094  617881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:15:07.559455  617881 ssh_runner.go:195] Run: openssl version
	I1013 23:15:07.567290  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:15:07.583607  617881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:15:07.590400  617881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:15:07.590517  617881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:15:07.656875  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:15:07.666166  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:15:07.680330  617881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:15:07.684788  617881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:15:07.684910  617881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:15:07.734731  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:15:07.745944  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:15:07.762124  617881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:15:07.766306  617881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:15:07.766418  617881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:15:07.829526  617881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
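The three symlink steps above publish each CA into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how TLS libraries locate trust anchors by hash. A sketch that shells out to the same `openssl x509 -hash -noout` invocation and creates the link; linkByHash is illustrative and assumes openssl on PATH and write access to /etc/ssl/certs:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of a CA certificate and
	// symlinks it as /etc/ssl/certs/<hash>.0, like the `ln -fs` calls above.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace a stale link, mirroring `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		_ = linkByHash("/usr/share/ca-certificates/minikubeCA.pem")
	}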
	I1013 23:15:07.838539  617881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:15:07.848514  617881 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:15:07.848624  617881 kubeadm.go:400] StartCluster: {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:15:07.848750  617881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:15:07.848840  617881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:15:07.903901  617881 cri.go:89] found id: ""
	I1013 23:15:07.904045  617881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:15:07.914660  617881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:15:07.928401  617881 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:15:07.928523  617881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:15:07.939887  617881 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:15:07.939957  617881 kubeadm.go:157] found existing configuration files:
	
	I1013 23:15:07.940040  617881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 23:15:07.953557  617881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:15:07.953680  617881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:15:07.970648  617881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 23:15:07.996167  617881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:15:07.996288  617881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:15:08.014689  617881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 23:15:08.033058  617881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:15:08.033185  617881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:15:08.055215  617881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 23:15:08.074567  617881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:15:08.074701  617881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 23:15:08.093208  617881 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:15:08.180284  617881 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:15:08.180350  617881 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:15:08.226107  617881 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:15:08.226191  617881 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:15:08.226234  617881 kubeadm.go:318] OS: Linux
	I1013 23:15:08.226286  617881 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:15:08.226352  617881 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:15:08.226406  617881 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:15:08.226460  617881 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:15:08.226514  617881 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:15:08.226568  617881 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:15:08.226619  617881 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:15:08.226674  617881 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:15:08.226726  617881 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:15:08.335792  617881 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:15:08.335923  617881 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:15:08.336022  617881 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:15:08.357292  617881 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:15:08.362354  617881 out.go:252]   - Generating certificates and keys ...
	I1013 23:15:08.362461  617881 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:15:08.362532  617881 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:15:08.481224  617881 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
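The kubeadm init invocation logged at 23:15:08.093 above waives a long --ignore-preflight-errors list because the node is a container, not a VM (kubeadm.go:214 notes the SystemVerification skip for the docker driver). A sketch of building that invocation with os/exec; the list below is abridged from the logged command, and the wrapper is illustrative rather than minikube's actual code path:

	package main

	import (
		"os/exec"
		"strings"
	)

	func main() {
		// Preflight checks skipped inside a docker-driver container
		// (abridged; the full list is in the Start: line above).
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		cmd := exec.Command("sudo", "/bin/bash", "-c",
			`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init`+
				" --config /var/tmp/minikube/kubeadm.yaml"+
				" --ignore-preflight-errors="+strings.Join(ignored, ","))
		_ = cmd.Run()
	}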
	I1013 23:15:07.931706  614055 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.95934414s
	I1013 23:15:09.652070  614055 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.6807847s
	I1013 23:15:10.473009  614055 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.501667513s
	I1013 23:15:10.500572  614055 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:15:10.519820  614055 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:15:10.546376  614055 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:15:10.546593  614055 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-985461 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:15:10.588037  614055 kubeadm.go:318] [bootstrap-token] Using token: 632hc9.979r4f1gpffic4j6
	I1013 23:15:10.590932  614055 out.go:252]   - Configuring RBAC rules ...
	I1013 23:15:10.591071  614055 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:15:10.601168  614055 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:15:10.610830  614055 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:15:10.619145  614055 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:15:10.624151  614055 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:15:10.628660  614055 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:15:10.881800  614055 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:15:11.392592  614055 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:15:11.903435  614055 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:15:11.905027  614055 kubeadm.go:318] 
	I1013 23:15:11.905113  614055 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:15:11.905120  614055 kubeadm.go:318] 
	I1013 23:15:11.905200  614055 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:15:11.905205  614055 kubeadm.go:318] 
	I1013 23:15:11.905231  614055 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:15:11.905675  614055 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:15:11.905742  614055 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:15:11.905748  614055 kubeadm.go:318] 
	I1013 23:15:11.905804  614055 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:15:11.905809  614055 kubeadm.go:318] 
	I1013 23:15:11.905858  614055 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:15:11.905862  614055 kubeadm.go:318] 
	I1013 23:15:11.905916  614055 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:15:11.905994  614055 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:15:11.906065  614055 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:15:11.906070  614055 kubeadm.go:318] 
	I1013 23:15:11.906355  614055 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:15:11.906442  614055 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:15:11.906447  614055 kubeadm.go:318] 
	I1013 23:15:11.906785  614055 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 632hc9.979r4f1gpffic4j6 \
	I1013 23:15:11.906899  614055 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:15:11.908948  614055 kubeadm.go:318] 	--control-plane 
	I1013 23:15:11.908961  614055 kubeadm.go:318] 
	I1013 23:15:11.909279  614055 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:15:11.909289  614055 kubeadm.go:318] 
	I1013 23:15:11.909582  614055 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 632hc9.979r4f1gpffic4j6 \
	I1013 23:15:11.909893  614055 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:15:11.917055  614055 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 23:15:11.917288  614055 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:15:11.917396  614055 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
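The join command kubeadm prints above authenticates the cluster CA through --discovery-token-ca-cert-hash, which is a SHA-256 digest over the CA certificate's DER-encoded SubjectPublicKeyInfo. A stdlib-only sketch computing that value from the CA file used here; caCertHash is a hypothetical helper, but the digest procedure is the standard kubeadm one:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash computes the value printed after --discovery-token-ca-cert-hash:
	// sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo.
	func caCertHash(path string) (string, error) {
		pemBytes, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
	}

	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		fmt.Println(h)
	}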
	I1013 23:15:11.917412  614055 cni.go:84] Creating CNI manager for ""
	I1013 23:15:11.917420  614055 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:15:11.921771  614055 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 23:15:09.406936  617881 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:15:09.913856  617881 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:15:10.821204  617881 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:15:11.145519  617881 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:15:11.146061  617881 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-505482 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:15:12.111479  617881 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:15:12.112016  617881 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-505482 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:15:12.573923  617881 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:15:12.636091  617881 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:15:12.843433  617881 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:15:12.843939  617881 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:15:13.378605  617881 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:15:14.223765  617881 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:15:14.746691  617881 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:15:15.401273  617881 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:15:15.808514  617881 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:15:15.812028  617881 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:15:15.812121  617881 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:15:11.924692  614055 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:15:11.931988  614055 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:15:11.932006  614055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:15:11.947455  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:15:12.314329  614055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:15:12.314457  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:12.314524  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-985461 minikube.k8s.io/updated_at=2025_10_13T23_15_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=no-preload-985461 minikube.k8s.io/primary=true
	I1013 23:15:12.618062  614055 ops.go:34] apiserver oom_adj: -16
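The oom_adj probe above shells out to read /proc/<pid>/oom_adj for the apiserver; the logged value of -16 means the kernel's OOM killer will strongly prefer other victims. A minimal sketch of the same probe, assuming a single exact-name match is enough (the log's pgrep pattern additionally filters on the full command line):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest process named exactly "kube-apiserver".
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	// Read its OOM score adjustment from procfs.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}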
	I1013 23:15:12.618170  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:13.118311  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:13.618798  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:14.118417  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:14.618251  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:15.118474  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:15.619105  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:16.118306  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:16.618278  614055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:16.825965  614055 kubeadm.go:1113] duration metric: took 4.511550858s to wait for elevateKubeSystemPrivileges
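The repeated "kubectl get sa default" runs above are a readiness poll: elevateKubeSystemPrivileges cannot bind RBAC to the default ServiceAccount until it exists. A sketch of that loop, assuming the roughly 500ms interval visible in the timestamps and an illustrative 2-minute cap (the real timeout is minikube's, not shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil { // non-zero exit until the SA exists
			fmt.Println("default ServiceAccount exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // interval seen in the log
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}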
	I1013 23:15:16.825992  614055 kubeadm.go:402] duration metric: took 28.611255297s to StartCluster
	I1013 23:15:16.826013  614055 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:16.826076  614055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:15:16.826721  614055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:16.826930  614055 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:15:16.827122  614055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:15:16.827360  614055 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:15:16.827394  614055 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:15:16.827461  614055 addons.go:69] Setting storage-provisioner=true in profile "no-preload-985461"
	I1013 23:15:16.827475  614055 addons.go:238] Setting addon storage-provisioner=true in "no-preload-985461"
	I1013 23:15:16.827497  614055 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:15:16.827986  614055 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:15:16.828699  614055 addons.go:69] Setting default-storageclass=true in profile "no-preload-985461"
	I1013 23:15:16.828727  614055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-985461"
	I1013 23:15:16.829009  614055 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:15:16.833467  614055 out.go:179] * Verifying Kubernetes components...
	I1013 23:15:16.839232  614055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:15:16.869232  614055 addons.go:238] Setting addon default-storageclass=true in "no-preload-985461"
	I1013 23:15:16.869278  614055 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:15:16.869722  614055 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:15:16.882081  614055 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:15:16.886463  614055 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:15:16.886488  614055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:15:16.886581  614055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:15:16.906385  614055 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:15:16.906407  614055 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:15:16.906469  614055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:15:16.932735  614055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:15:16.947069  614055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
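The two sshutil lines above connect to 127.0.0.1:33454, the host port Docker mapped to the container's 22/tcp. The cli_runner invocations that discovered it apply a Go template to docker container inspect; a sketch of the same lookup, with the template copied verbatim from the log and the container named after the profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port bound to the container's SSH port (22/tcp).
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "no-preload-985461").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}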
	I1013 23:15:17.426117  614055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:15:17.431520  614055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:15:17.478671  614055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:15:17.478821  614055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:15:19.065156  614055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.633590485s)
	I1013 23:15:19.065366  614055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586670993s)
	I1013 23:15:19.065391  614055 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
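The sed pipeline completed above rewrites the coredns ConfigMap in place: it inserts a hosts block before the forward directive and a log directive before errors, which is how host.minikube.internal becomes resolvable from pods. Assuming the stock kubeadm Corefile layout, the edited server block would contain an excerpt like:

    log
    errors
    # ...directives in between unchanged...
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The fallthrough line matters: names not matched by the hosts block still fall through to the kubernetes and forward plugins.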
	I1013 23:15:19.065630  614055 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.586781383s)
	I1013 23:15:19.066310  614055 node_ready.go:35] waiting up to 6m0s for node "no-preload-985461" to be "Ready" ...
	I1013 23:15:19.068308  614055 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
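The node_ready wait that starts above (and produces the Ready:False retry warnings below) boils down to polling the node object and inspecting its Ready condition. A sketch using client-go, with the kubeconfig path and 6-minute budget from this run; the 2s poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21724-428797/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // wait budget from the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"no-preload-985461", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed interval
	}
	fmt.Println("timed out waiting for Ready")
}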
	I1013 23:15:15.816436  617881 out.go:252]   - Booting up control plane ...
	I1013 23:15:15.816566  617881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:15:15.816691  617881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:15:15.816849  617881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:15:15.835588  617881 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:15:15.836003  617881 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:15:15.845697  617881 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:15:15.845809  617881 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:15:15.845960  617881 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:15:15.995914  617881 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:15:15.996039  617881 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 23:15:16.997135  617881 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001324138s
	I1013 23:15:17.011980  617881 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:15:17.012098  617881 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 23:15:17.012201  617881 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:15:17.012289  617881 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 23:15:19.071041  614055 addons.go:514] duration metric: took 2.24363656s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 23:15:19.570244  614055 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-985461" context rescaled to 1 replicas
	W1013 23:15:21.069259  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	I1013 23:15:22.353985  617881 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.340526576s
	I1013 23:15:24.844401  617881 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.832442718s
	I1013 23:15:26.013997  617881 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.001734842s
	I1013 23:15:26.036022  617881 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:15:26.053001  617881 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:15:26.073137  617881 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:15:26.073347  617881 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-505482 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:15:26.089225  617881 kubeadm.go:318] [bootstrap-token] Using token: e2ey2s.3uft172qfqlmfogh
	W1013 23:15:23.069689  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:25.569513  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	I1013 23:15:26.092294  617881 out.go:252]   - Configuring RBAC rules ...
	I1013 23:15:26.092434  617881 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:15:26.097888  617881 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:15:26.111802  617881 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:15:26.118447  617881 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:15:26.123208  617881 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:15:26.127808  617881 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:15:26.422804  617881 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:15:26.885325  617881 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:15:27.425938  617881 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:15:27.427198  617881 kubeadm.go:318] 
	I1013 23:15:27.427275  617881 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:15:27.427280  617881 kubeadm.go:318] 
	I1013 23:15:27.427361  617881 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:15:27.427365  617881 kubeadm.go:318] 
	I1013 23:15:27.427391  617881 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:15:27.427452  617881 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:15:27.427505  617881 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:15:27.427509  617881 kubeadm.go:318] 
	I1013 23:15:27.427565  617881 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:15:27.427570  617881 kubeadm.go:318] 
	I1013 23:15:27.427620  617881 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:15:27.427624  617881 kubeadm.go:318] 
	I1013 23:15:27.427678  617881 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:15:27.427835  617881 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:15:27.427909  617881 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:15:27.427914  617881 kubeadm.go:318] 
	I1013 23:15:27.428002  617881 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:15:27.428082  617881 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:15:27.428086  617881 kubeadm.go:318] 
	I1013 23:15:27.428173  617881 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e2ey2s.3uft172qfqlmfogh \
	I1013 23:15:27.428280  617881 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:15:27.428303  617881 kubeadm.go:318] 	--control-plane 
	I1013 23:15:27.428307  617881 kubeadm.go:318] 
	I1013 23:15:27.428396  617881 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:15:27.428400  617881 kubeadm.go:318] 
	I1013 23:15:27.428494  617881 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e2ey2s.3uft172qfqlmfogh \
	I1013 23:15:27.428600  617881 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:15:27.433312  617881 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 23:15:27.433550  617881 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:15:27.433734  617881 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 23:15:27.433763  617881 cni.go:84] Creating CNI manager for ""
	I1013 23:15:27.433772  617881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:15:27.437898  617881 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 23:15:27.440807  617881 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:15:27.445495  617881 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:15:27.445518  617881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:15:27.464004  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:15:27.852947  617881 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:15:27.853094  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:27.853175  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-505482 minikube.k8s.io/updated_at=2025_10_13T23_15_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=embed-certs-505482 minikube.k8s.io/primary=true
	I1013 23:15:28.023280  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:28.023386  617881 ops.go:34] apiserver oom_adj: -16
	I1013 23:15:28.524319  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:29.023602  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1013 23:15:28.068906  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:30.079193  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	I1013 23:15:29.523915  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:30.024214  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:30.523816  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:31.023676  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:31.524296  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:32.023786  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:32.523396  617881 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:15:32.652752  617881 kubeadm.go:1113] duration metric: took 4.799704648s to wait for elevateKubeSystemPrivileges
	I1013 23:15:32.652778  617881 kubeadm.go:402] duration metric: took 24.804160295s to StartCluster
	I1013 23:15:32.652795  617881 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:32.652863  617881 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:15:32.654243  617881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:15:32.654453  617881 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:15:32.654555  617881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:15:32.654782  617881 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:15:32.654813  617881 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:15:32.654873  617881 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-505482"
	I1013 23:15:32.654887  617881 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-505482"
	I1013 23:15:32.654907  617881 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:15:32.655119  617881 addons.go:69] Setting default-storageclass=true in profile "embed-certs-505482"
	I1013 23:15:32.655148  617881 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-505482"
	I1013 23:15:32.655509  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:15:32.656154  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:15:32.663137  617881 out.go:179] * Verifying Kubernetes components...
	I1013 23:15:32.666286  617881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:15:32.699343  617881 addons.go:238] Setting addon default-storageclass=true in "embed-certs-505482"
	I1013 23:15:32.699382  617881 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:15:32.699819  617881 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:15:32.702240  617881 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:15:32.705144  617881 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:15:32.705166  617881 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:15:32.705233  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:32.736251  617881 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:15:32.736273  617881 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:15:32.736335  617881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:15:32.761528  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:32.769303  617881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:15:33.032866  617881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:15:33.032977  617881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:15:33.124200  617881 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:15:33.129968  617881 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:15:33.749715  617881 node_ready.go:35] waiting up to 6m0s for node "embed-certs-505482" to be "Ready" ...
	I1013 23:15:33.750057  617881 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 23:15:34.044676  617881 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 23:15:34.048596  617881 addons.go:514] duration metric: took 1.393768624s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:15:34.254050  617881 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-505482" context rescaled to 1 replicas
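The kapi rescale above trims the coredns deployment from kubeadm's default of two replicas down to one, which is enough for a single-node cluster. minikube does this through the API; a sketch of the equivalent operation via kubectl, reusing the binary and kubeconfig paths from this run:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Scale coredns down to a single replica in kube-system.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-n", "kube-system",
		"scale", "deployment", "coredns", "--replicas=1")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}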
	W1013 23:15:32.570103  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:35.070377  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:35.752516  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:15:37.752667  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:15:37.569572  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:40.070060  614055 node_ready.go:57] node "no-preload-985461" has "Ready":"False" status (will retry)
	W1013 23:15:39.753378  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:15:42.254587  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	I1013 23:15:42.069515  614055 node_ready.go:49] node "no-preload-985461" is "Ready"
	I1013 23:15:42.069550  614055 node_ready.go:38] duration metric: took 23.003224041s for node "no-preload-985461" to be "Ready" ...
	I1013 23:15:42.069565  614055 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:15:42.069631  614055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:15:42.104865  614055 api_server.go:72] duration metric: took 25.27790506s to wait for apiserver process to appear ...
	I1013 23:15:42.104894  614055 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:15:42.104916  614055 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:15:42.130325  614055 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:15:42.132258  614055 api_server.go:141] control plane version: v1.34.1
	I1013 23:15:42.132359  614055 api_server.go:131] duration metric: took 27.456807ms to wait for apiserver health ...
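The healthz wait above is a plain HTTPS GET that treats an HTTP 200 with body "ok" as healthy, exactly what the two log lines show. A minimal sketch; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Sketch only: the real probe verifies against the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}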
	I1013 23:15:42.132389  614055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:15:42.140704  614055 system_pods.go:59] 8 kube-system pods found
	I1013 23:15:42.140746  614055 system_pods.go:61] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:15:42.140753  614055 system_pods.go:61] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running
	I1013 23:15:42.140760  614055 system_pods.go:61] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running
	I1013 23:15:42.140764  614055 system_pods.go:61] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running
	I1013 23:15:42.140770  614055 system_pods.go:61] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running
	I1013 23:15:42.140775  614055 system_pods.go:61] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running
	I1013 23:15:42.140779  614055 system_pods.go:61] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running
	I1013 23:15:42.140786  614055 system_pods.go:61] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:15:42.140792  614055 system_pods.go:74] duration metric: took 8.365648ms to wait for pod list to return data ...
	I1013 23:15:42.140802  614055 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:15:42.148487  614055 default_sa.go:45] found service account: "default"
	I1013 23:15:42.148580  614055 default_sa.go:55] duration metric: took 7.770911ms for default service account to be created ...
	I1013 23:15:42.148607  614055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:15:42.155729  614055 system_pods.go:86] 8 kube-system pods found
	I1013 23:15:42.155844  614055 system_pods.go:89] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:15:42.155876  614055 system_pods.go:89] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running
	I1013 23:15:42.155919  614055 system_pods.go:89] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running
	I1013 23:15:42.155955  614055 system_pods.go:89] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running
	I1013 23:15:42.155993  614055 system_pods.go:89] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running
	I1013 23:15:42.156022  614055 system_pods.go:89] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running
	I1013 23:15:42.156044  614055 system_pods.go:89] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running
	I1013 23:15:42.156086  614055 system_pods.go:89] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:15:42.156175  614055 retry.go:31] will retry after 308.008939ms: missing components: kube-dns
	I1013 23:15:42.469089  614055 system_pods.go:86] 8 kube-system pods found
	I1013 23:15:42.469131  614055 system_pods.go:89] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:15:42.469138  614055 system_pods.go:89] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running
	I1013 23:15:42.469168  614055 system_pods.go:89] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running
	I1013 23:15:42.469178  614055 system_pods.go:89] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running
	I1013 23:15:42.469184  614055 system_pods.go:89] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running
	I1013 23:15:42.469188  614055 system_pods.go:89] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running
	I1013 23:15:42.469192  614055 system_pods.go:89] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running
	I1013 23:15:42.469204  614055 system_pods.go:89] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:15:42.469218  614055 retry.go:31] will retry after 246.444371ms: missing components: kube-dns
	I1013 23:15:42.719895  614055 system_pods.go:86] 8 kube-system pods found
	I1013 23:15:42.719932  614055 system_pods.go:89] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:15:42.719938  614055 system_pods.go:89] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running
	I1013 23:15:42.719945  614055 system_pods.go:89] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running
	I1013 23:15:42.719949  614055 system_pods.go:89] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running
	I1013 23:15:42.719955  614055 system_pods.go:89] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running
	I1013 23:15:42.719959  614055 system_pods.go:89] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running
	I1013 23:15:42.719963  614055 system_pods.go:89] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running
	I1013 23:15:42.719969  614055 system_pods.go:89] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:15:42.719990  614055 retry.go:31] will retry after 311.905485ms: missing components: kube-dns
	I1013 23:15:43.036704  614055 system_pods.go:86] 8 kube-system pods found
	I1013 23:15:43.036737  614055 system_pods.go:89] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Running
	I1013 23:15:43.036745  614055 system_pods.go:89] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running
	I1013 23:15:43.036752  614055 system_pods.go:89] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running
	I1013 23:15:43.036781  614055 system_pods.go:89] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running
	I1013 23:15:43.036795  614055 system_pods.go:89] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running
	I1013 23:15:43.036799  614055 system_pods.go:89] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running
	I1013 23:15:43.036804  614055 system_pods.go:89] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running
	I1013 23:15:43.036808  614055 system_pods.go:89] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Running
	I1013 23:15:43.036817  614055 system_pods.go:126] duration metric: took 888.190191ms to wait for k8s-apps to be running ...
	I1013 23:15:43.036830  614055 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:15:43.036902  614055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:15:43.053306  614055 system_svc.go:56] duration metric: took 16.465243ms WaitForService to wait for kubelet
	I1013 23:15:43.053333  614055 kubeadm.go:586] duration metric: took 26.226379833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:15:43.053358  614055 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:15:43.056744  614055 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:15:43.056779  614055 node_conditions.go:123] node cpu capacity is 2
	I1013 23:15:43.056793  614055 node_conditions.go:105] duration metric: took 3.429264ms to run NodePressure ...
	I1013 23:15:43.056805  614055 start.go:241] waiting for startup goroutines ...
	I1013 23:15:43.056813  614055 start.go:246] waiting for cluster config update ...
	I1013 23:15:43.056823  614055 start.go:255] writing updated cluster config ...
	I1013 23:15:43.057118  614055 ssh_runner.go:195] Run: rm -f paused
	I1013 23:15:43.061812  614055 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:15:43.066069  614055 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qz7kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.071644  614055 pod_ready.go:94] pod "coredns-66bc5c9577-qz7kw" is "Ready"
	I1013 23:15:43.071672  614055 pod_ready.go:86] duration metric: took 5.577627ms for pod "coredns-66bc5c9577-qz7kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.074270  614055 pod_ready.go:83] waiting for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.081194  614055 pod_ready.go:94] pod "etcd-no-preload-985461" is "Ready"
	I1013 23:15:43.081266  614055 pod_ready.go:86] duration metric: took 6.971133ms for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.084169  614055 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.090061  614055 pod_ready.go:94] pod "kube-apiserver-no-preload-985461" is "Ready"
	I1013 23:15:43.090091  614055 pod_ready.go:86] duration metric: took 5.892344ms for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.092864  614055 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.466664  614055 pod_ready.go:94] pod "kube-controller-manager-no-preload-985461" is "Ready"
	I1013 23:15:43.466695  614055 pod_ready.go:86] duration metric: took 373.803106ms for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:43.666993  614055 pod_ready.go:83] waiting for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:44.066342  614055 pod_ready.go:94] pod "kube-proxy-24vhq" is "Ready"
	I1013 23:15:44.066373  614055 pod_ready.go:86] duration metric: took 399.349741ms for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:44.266655  614055 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:44.666604  614055 pod_ready.go:94] pod "kube-scheduler-no-preload-985461" is "Ready"
	I1013 23:15:44.666636  614055 pod_ready.go:86] duration metric: took 399.952905ms for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:15:44.666650  614055 pod_ready.go:40] duration metric: took 1.604802884s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:15:44.734685  614055 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:15:44.743135  614055 out.go:179] * Done! kubectl is now configured to use "no-preload-985461" cluster and "default" namespace by default
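The "minor skew: 1" note above compares the local kubectl minor version (1.33) against the cluster's (1.34); kubectl is supported within one minor version of the apiserver, so this is only informational. A sketch of that check, assuming plain "major.minor.patch" version strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version.
// Assumes a well-formed three-part version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.33.2", "1.34.1" // versions from the log
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n",
		client, cluster, skew)
}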
	W1013 23:15:44.753036  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:15:46.753499  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:15:49.252594  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 13 23:15:42 no-preload-985461 crio[839]: time="2025-10-13T23:15:42.26811902Z" level=info msg="Created container bb28b392603ed8ec0d6121816c4d8d2466ca7b060025d86068019dce97174618: kube-system/coredns-66bc5c9577-qz7kw/coredns" id=255e4470-9470-4517-baf5-030f4d2aecd6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:15:42 no-preload-985461 crio[839]: time="2025-10-13T23:15:42.269542622Z" level=info msg="Starting container: bb28b392603ed8ec0d6121816c4d8d2466ca7b060025d86068019dce97174618" id=f4705183-9d1e-4b95-a1e7-b2e3b1adcb6a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:15:42 no-preload-985461 crio[839]: time="2025-10-13T23:15:42.274165847Z" level=info msg="Started container" PID=2501 containerID=bb28b392603ed8ec0d6121816c4d8d2466ca7b060025d86068019dce97174618 description=kube-system/coredns-66bc5c9577-qz7kw/coredns id=f4705183-9d1e-4b95-a1e7-b2e3b1adcb6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2357d55d8cc36f38ac88c68a4febbd1fe01275825ef35263f63303fbbe10430b
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.254047292Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d0f6ddde-f523-4f06-9c1a-5dbcb39b8315 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.254133099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.268073314Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6c1b51940aac1471cce9919814a86e14d78cc61ef99ee5b7a717dab097362f0 UID:9c064996-48ad-4fe6-af64-76040f212388 NetNS:/var/run/netns/f332e963-0e1e-4119-9286-7b31f6359d7d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001694318}] Aliases:map[]}"
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.268156742Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.289431142Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6c1b51940aac1471cce9919814a86e14d78cc61ef99ee5b7a717dab097362f0 UID:9c064996-48ad-4fe6-af64-76040f212388 NetNS:/var/run/netns/f332e963-0e1e-4119-9286-7b31f6359d7d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001694318}] Aliases:map[]}"
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.289833029Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.293290477Z" level=info msg="Ran pod sandbox b6c1b51940aac1471cce9919814a86e14d78cc61ef99ee5b7a717dab097362f0 with infra container: default/busybox/POD" id=d0f6ddde-f523-4f06-9c1a-5dbcb39b8315 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.297489695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f73d4db-ff58-48c5-a8eb-f7aa440f5b6e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.298008552Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8f73d4db-ff58-48c5-a8eb-f7aa440f5b6e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.298228519Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8f73d4db-ff58-48c5-a8eb-f7aa440f5b6e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.307455065Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8533c804-8630-47e2-bd68-552172941b59 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:15:45 no-preload-985461 crio[839]: time="2025-10-13T23:15:45.311816692Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.27000127Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8533c804-8630-47e2-bd68-552172941b59 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.270660957Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c52921a-2cb3-43a2-9130-a01a9d2b6d8e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.274209743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=087c2d95-250e-4a2b-9b9c-174e7570d9d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.280007706Z" level=info msg="Creating container: default/busybox/busybox" id=191cc421-e37b-440f-bcd5-0ff62b7567d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.281172548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.286235691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.28673461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.302065163Z" level=info msg="Created container 95ad7bb0adce1cffd73b03cd46ae84527ec36ef812c61df0075ac699cad6b78c: default/busybox/busybox" id=191cc421-e37b-440f-bcd5-0ff62b7567d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.303058793Z" level=info msg="Starting container: 95ad7bb0adce1cffd73b03cd46ae84527ec36ef812c61df0075ac699cad6b78c" id=8fe6c721-639e-478d-84f8-1541327f9135 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:15:47 no-preload-985461 crio[839]: time="2025-10-13T23:15:47.307999419Z" level=info msg="Started container" PID=2557 containerID=95ad7bb0adce1cffd73b03cd46ae84527ec36ef812c61df0075ac699cad6b78c description=default/busybox/busybox id=8fe6c721-639e-478d-84f8-1541327f9135 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6c1b51940aac1471cce9919814a86e14d78cc61ef99ee5b7a717dab097362f0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	95ad7bb0adce1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   b6c1b51940aac       busybox                                     default
	bb28b392603ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   2357d55d8cc36       coredns-66bc5c9577-qz7kw                    kube-system
	1bb88e731aabe       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   18556554ba83b       storage-provisioner                         kube-system
	ad529e0b43964       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   d6ad6b3624e53       kindnet-ljpdl                               kube-system
	8b33cb1fb4828       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      36 seconds ago      Running             kube-proxy                0                   296abb28f23f9       kube-proxy-24vhq                            kube-system
	0c108cd99453d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      53 seconds ago      Running             kube-apiserver            0                   03165b2b477ff       kube-apiserver-no-preload-985461            kube-system
	9280dff9c0dec       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      53 seconds ago      Running             kube-controller-manager   0                   dd922f6dae5f3       kube-controller-manager-no-preload-985461   kube-system
	bf7681fd3f292       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      53 seconds ago      Running             kube-scheduler            0                   3ca0665d2a20f       kube-scheduler-no-preload-985461            kube-system
	219ac77dfef81       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      54 seconds ago      Running             etcd                      0                   f40324b0e1e90       etcd-no-preload-985461                      kube-system
	
	
	==> coredns [bb28b392603ed8ec0d6121816c4d8d2466ca7b060025d86068019dce97174618] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36869 - 55722 "HINFO IN 8277714131416649902.2611319361303017423. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023037778s
	
	
	==> describe nodes <==
	Name:               no-preload-985461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-985461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-985461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-985461
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:15:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:15:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:15:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:15:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:15:52 +0000   Mon, 13 Oct 2025 23:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-985461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                c81637a3-d3d8-45df-8334-a3fb5c4d8e37
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-qz7kw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     37s
	  kube-system                 etcd-no-preload-985461                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-ljpdl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-no-preload-985461             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-no-preload-985461    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-24vhq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-no-preload-985461             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 36s                kube-proxy       
	  Normal   NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s                kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s                kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s                kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           38s                node-controller  Node no-preload-985461 event: Registered Node no-preload-985461 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-985461 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 22:51] overlayfs: idmapped layers are currently not supported
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [219ac77dfef81310a4622e551fb16f5831ca15d8592c5b5613356e4d20b79d5c] <==
	{"level":"warn","ts":"2025-10-13T23:15:04.324686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.378266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.391648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.429333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.489481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.550619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.578253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.594229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.627993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.699465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.727233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.743882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.768009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.821622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.865666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.883667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.917940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.939819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.957182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:04.984293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:05.063204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:05.095552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:05.140179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:05.188029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:05.593102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56040","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:15:54 up  2:58,  0 user,  load average: 3.87, 3.26, 2.62
	Linux no-preload-985461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ad529e0b439643b9af8e73de5423e263bedf531adb724c5240cdbe36e216f23d] <==
	I1013 23:15:31.105090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:15:31.105495       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:15:31.105639       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:15:31.105660       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:15:31.105675       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:15:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:15:31.306518       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:15:31.306592       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:15:31.306629       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:15:31.307953       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 23:15:31.507298       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:15:31.507325       1 metrics.go:72] Registering metrics
	I1013 23:15:31.507383       1 controller.go:711] "Syncing nftables rules"
	I1013 23:15:41.313711       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:15:41.313767       1 main.go:301] handling current node
	I1013 23:15:51.308226       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:15:51.308263       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c108cd99453d915c4b5798d91bc3bd32b91c104a589b41fd06ea340d567c7fb] <==
	I1013 23:15:07.911345       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:15:07.979189       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:15:07.979255       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 23:15:07.911367       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:15:08.030945       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:15:08.069159       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:15:08.073175       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:15:08.339605       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 23:15:08.367069       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 23:15:08.367275       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:15:09.986520       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:15:10.062362       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:15:10.205379       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 23:15:10.216045       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 23:15:10.217586       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:15:10.224178       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:15:11.025710       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:15:11.350075       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:15:11.391515       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 23:15:11.424736       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 23:15:16.945144       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 23:15:17.338648       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:15:17.468045       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:15:17.545406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1013 23:15:53.090250       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:49512: use of closed network connection
	
	
	==> kube-controller-manager [9280dff9c0dec7a01838f6673f0e025e157856e21a34710c386d87b521a62ceb] <==
	I1013 23:15:16.043814       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:15:16.043990       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:15:16.054185       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:15:16.059994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:15:16.062672       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 23:15:16.064102       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:15:16.064449       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:15:16.064563       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:15:16.065460       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 23:15:16.066042       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:15:16.069046       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 23:15:16.069217       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 23:15:16.069262       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:15:16.069324       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:15:16.070111       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:15:16.074729       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:15:16.088288       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:15:16.090855       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:15:16.094486       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:15:16.094525       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:15:16.113326       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:15:16.113353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:15:16.113361       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 23:15:16.133325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:15:46.018699       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8b33cb1fb4828de99c3989b3e0cdfa1224e6456545cf834e24f973d17609b43b] <==
	I1013 23:15:18.224511       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:15:18.332789       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:15:18.432883       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:15:18.432918       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:15:18.433003       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:15:18.537546       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:15:18.542421       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:15:18.552033       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:15:18.552310       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:15:18.552326       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:15:18.553713       1 config.go:200] "Starting service config controller"
	I1013 23:15:18.553723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:15:18.553738       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:15:18.553742       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:15:18.553754       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:15:18.553758       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:15:18.554376       1 config.go:309] "Starting node config controller"
	I1013 23:15:18.554383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:15:18.554389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:15:18.655215       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:15:18.655254       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:15:18.655302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bf7681fd3f2928ec5ef6e4a4e47251218c77abba467b297236d6a461ccc1eb47] <==
	I1013 23:15:04.664816       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:15:09.533067       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:15:09.533164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:15:09.533207       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:15:09.533245       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:15:09.591909       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:15:09.592055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:15:09.600070       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:15:09.600181       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1013 23:15:09.601928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 23:15:09.602574       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:15:09.607273       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:15:10.800811       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:15:16 no-preload-985461 kubelet[2006]: I1013 23:15:16.052453    2006 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 23:15:16 no-preload-985461 kubelet[2006]: I1013 23:15:16.053432    2006 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.366107    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/186209cf-4abd-4f9d-925d-5ace9f59c705-kube-proxy\") pod \"kube-proxy-24vhq\" (UID: \"186209cf-4abd-4f9d-925d-5ace9f59c705\") " pod="kube-system/kube-proxy-24vhq"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.366199    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nhv\" (UniqueName: \"kubernetes.io/projected/186209cf-4abd-4f9d-925d-5ace9f59c705-kube-api-access-b6nhv\") pod \"kube-proxy-24vhq\" (UID: \"186209cf-4abd-4f9d-925d-5ace9f59c705\") " pod="kube-system/kube-proxy-24vhq"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.366262    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/186209cf-4abd-4f9d-925d-5ace9f59c705-xtables-lock\") pod \"kube-proxy-24vhq\" (UID: \"186209cf-4abd-4f9d-925d-5ace9f59c705\") " pod="kube-system/kube-proxy-24vhq"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.366283    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/186209cf-4abd-4f9d-925d-5ace9f59c705-lib-modules\") pod \"kube-proxy-24vhq\" (UID: \"186209cf-4abd-4f9d-925d-5ace9f59c705\") " pod="kube-system/kube-proxy-24vhq"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.468525    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhk2d\" (UniqueName: \"kubernetes.io/projected/374474f9-4eef-4142-b969-273938b503bf-kube-api-access-xhk2d\") pod \"kindnet-ljpdl\" (UID: \"374474f9-4eef-4142-b969-273938b503bf\") " pod="kube-system/kindnet-ljpdl"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.468619    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/374474f9-4eef-4142-b969-273938b503bf-cni-cfg\") pod \"kindnet-ljpdl\" (UID: \"374474f9-4eef-4142-b969-273938b503bf\") " pod="kube-system/kindnet-ljpdl"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.468640    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/374474f9-4eef-4142-b969-273938b503bf-xtables-lock\") pod \"kindnet-ljpdl\" (UID: \"374474f9-4eef-4142-b969-273938b503bf\") " pod="kube-system/kindnet-ljpdl"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.468658    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/374474f9-4eef-4142-b969-273938b503bf-lib-modules\") pod \"kindnet-ljpdl\" (UID: \"374474f9-4eef-4142-b969-273938b503bf\") " pod="kube-system/kindnet-ljpdl"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: I1013 23:15:17.581272    2006 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:15:17 no-preload-985461 kubelet[2006]: W1013 23:15:17.916528    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-296abb28f23f9f88f90ff73198802ee46cff085cd57796abbce3be774e1814ad WatchSource:0}: Error finding container 296abb28f23f9f88f90ff73198802ee46cff085cd57796abbce3be774e1814ad: Status 404 returned error can't find the container with id 296abb28f23f9f88f90ff73198802ee46cff085cd57796abbce3be774e1814ad
	Oct 13 23:15:20 no-preload-985461 kubelet[2006]: I1013 23:15:20.493147    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-24vhq" podStartSLOduration=4.493130092 podStartE2EDuration="4.493130092s" podCreationTimestamp="2025-10-13 23:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:15:18.758203491 +0000 UTC m=+7.504742933" watchObservedRunningTime="2025-10-13 23:15:20.493130092 +0000 UTC m=+9.239669543"
	Oct 13 23:15:31 no-preload-985461 kubelet[2006]: I1013 23:15:31.818723    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ljpdl" podStartSLOduration=1.672610624 podStartE2EDuration="14.818706153s" podCreationTimestamp="2025-10-13 23:15:17 +0000 UTC" firstStartedPulling="2025-10-13 23:15:17.784327999 +0000 UTC m=+6.530867441" lastFinishedPulling="2025-10-13 23:15:30.930423529 +0000 UTC m=+19.676962970" observedRunningTime="2025-10-13 23:15:31.818478179 +0000 UTC m=+20.565017629" watchObservedRunningTime="2025-10-13 23:15:31.818706153 +0000 UTC m=+20.565245595"
	Oct 13 23:15:41 no-preload-985461 kubelet[2006]: I1013 23:15:41.755239    2006 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 23:15:41 no-preload-985461 kubelet[2006]: I1013 23:15:41.831576    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s4bf\" (UniqueName: \"kubernetes.io/projected/dd5ba110-c29a-4fc8-b404-86e61c57b62f-kube-api-access-9s4bf\") pod \"storage-provisioner\" (UID: \"dd5ba110-c29a-4fc8-b404-86e61c57b62f\") " pod="kube-system/storage-provisioner"
	Oct 13 23:15:41 no-preload-985461 kubelet[2006]: I1013 23:15:41.831626    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c-config-volume\") pod \"coredns-66bc5c9577-qz7kw\" (UID: \"3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c\") " pod="kube-system/coredns-66bc5c9577-qz7kw"
	Oct 13 23:15:41 no-preload-985461 kubelet[2006]: I1013 23:15:41.831648    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz5j5\" (UniqueName: \"kubernetes.io/projected/3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c-kube-api-access-tz5j5\") pod \"coredns-66bc5c9577-qz7kw\" (UID: \"3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c\") " pod="kube-system/coredns-66bc5c9577-qz7kw"
	Oct 13 23:15:41 no-preload-985461 kubelet[2006]: I1013 23:15:41.831670    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dd5ba110-c29a-4fc8-b404-86e61c57b62f-tmp\") pod \"storage-provisioner\" (UID: \"dd5ba110-c29a-4fc8-b404-86e61c57b62f\") " pod="kube-system/storage-provisioner"
	Oct 13 23:15:42 no-preload-985461 kubelet[2006]: W1013 23:15:42.150773    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-18556554ba83b2811e5adc9733b5884753f9e16d172c1f32b1eb1a5a73983c1b WatchSource:0}: Error finding container 18556554ba83b2811e5adc9733b5884753f9e16d172c1f32b1eb1a5a73983c1b: Status 404 returned error can't find the container with id 18556554ba83b2811e5adc9733b5884753f9e16d172c1f32b1eb1a5a73983c1b
	Oct 13 23:15:42 no-preload-985461 kubelet[2006]: W1013 23:15:42.205800    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-2357d55d8cc36f38ac88c68a4febbd1fe01275825ef35263f63303fbbe10430b WatchSource:0}: Error finding container 2357d55d8cc36f38ac88c68a4febbd1fe01275825ef35263f63303fbbe10430b: Status 404 returned error can't find the container with id 2357d55d8cc36f38ac88c68a4febbd1fe01275825ef35263f63303fbbe10430b
	Oct 13 23:15:42 no-preload-985461 kubelet[2006]: I1013 23:15:42.869794    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qz7kw" podStartSLOduration=25.869765148 podStartE2EDuration="25.869765148s" podCreationTimestamp="2025-10-13 23:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:15:42.832859868 +0000 UTC m=+31.579399318" watchObservedRunningTime="2025-10-13 23:15:42.869765148 +0000 UTC m=+31.616304598"
	Oct 13 23:15:42 no-preload-985461 kubelet[2006]: I1013 23:15:42.901277    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=23.901248589 podStartE2EDuration="23.901248589s" podCreationTimestamp="2025-10-13 23:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:15:42.870986317 +0000 UTC m=+31.617525775" watchObservedRunningTime="2025-10-13 23:15:42.901248589 +0000 UTC m=+31.647788047"
	Oct 13 23:15:44 no-preload-985461 kubelet[2006]: I1013 23:15:44.948305    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmq9m\" (UniqueName: \"kubernetes.io/projected/9c064996-48ad-4fe6-af64-76040f212388-kube-api-access-hmq9m\") pod \"busybox\" (UID: \"9c064996-48ad-4fe6-af64-76040f212388\") " pod="default/busybox"
	Oct 13 23:15:47 no-preload-985461 kubelet[2006]: I1013 23:15:47.842846    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.869843892 podStartE2EDuration="3.842828582s" podCreationTimestamp="2025-10-13 23:15:44 +0000 UTC" firstStartedPulling="2025-10-13 23:15:45.298671759 +0000 UTC m=+34.045211201" lastFinishedPulling="2025-10-13 23:15:47.271656441 +0000 UTC m=+36.018195891" observedRunningTime="2025-10-13 23:15:47.842539053 +0000 UTC m=+36.589078503" watchObservedRunningTime="2025-10-13 23:15:47.842828582 +0000 UTC m=+36.589368032"
	
	
	==> storage-provisioner [1bb88e731aabe646988eb7d17610278c4e7c18aa36f6f86f72efdbb7870c8149] <==
	I1013 23:15:42.252257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:15:42.269137       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:15:42.269189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:15:42.305848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:42.319477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:15:42.319772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:15:42.320322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dee9d5f-8952-4fb3-ad36-2f1171378517", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-985461_75f214b9-ffa1-41e6-a306-4672edbe85e4 became leader
	I1013 23:15:42.320499       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-985461_75f214b9-ffa1-41e6-a306-4672edbe85e4!
	W1013 23:15:42.338573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:42.351297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:15:42.420919       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-985461_75f214b9-ffa1-41e6-a306-4672edbe85e4!
	W1013 23:15:44.354782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:44.361939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:46.364788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:46.369284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:48.372950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:48.379950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:50.382976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:50.387676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:52.391233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:52.396991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:54.400272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:15:54.405137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-985461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.53s)
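For context, the EnableAddonWhileActive tests enable metrics-server with an image override and then assert that the resulting deployment references it. A minimal manual check, as a sketch (assuming the kubectl context no-preload-985461 used above, and the same fake.domain override shown for the embed-certs run below):

	kubectl --context no-preload-985461 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# When the addon enables successfully, the printed image should contain:
	#   fake.domain/registry.k8s.io/echoserver:1.4
	# In this run the enable step itself failed, so the deployment is absent.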

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (364.776422ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-505482 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-505482 describe deploy/metrics-server -n kube-system: exit status 1 (127.419439ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-505482 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
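The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pause check, which lists runc containers on the node before enabling the addon (see "check paused: list paused" in the stderr). A hedged reproduction sketch, assuming the profile name embed-certs-505482 and that `minikube ssh` reaches the node container:

	out/minikube-linux-arm64 -p embed-certs-505482 ssh -- sudo runc list -f json
	# On this crio node the runc state directory /run/runc is absent, so the
	# command should exit 1 with: open /run/runc: no such file or directory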
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-505482
helpers_test.go:243: (dbg) docker inspect embed-certs-505482:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	        "Created": "2025-10-13T23:14:55.44592554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 618318,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:14:55.538480204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hosts",
	        "LogPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b-json.log",
	        "Name": "/embed-certs-505482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-505482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-505482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	                "LowerDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-505482",
	                "Source": "/var/lib/docker/volumes/embed-certs-505482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-505482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-505482",
	                "name.minikube.sigs.k8s.io": "embed-certs-505482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c4859d77808dd17bc8a635048c1ed31e07999e706f9299b502399a8744b09af",
	            "SandboxKey": "/var/run/docker/netns/3c4859d77808",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-505482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:ab:b2:71:44:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23158782726c8cb4fc25349485432199b9ed3873182fa18e871d267e9c5dee9e",
	                    "EndpointID": "93b951ed534f325dac4eb339050ca0ecc92cab78251ad1f81d4fec348569d7f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-505482",
	                        "a9accf0872e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
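Note that the inspect output above reports "Status": "running" and "Paused": false, so the failed pause check did not come from Docker pausing the node container; the error originates inside it (the missing /run/runc state directory). A quick confirmation sketch, assuming the container name from the inspect output:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-505482
	# Expected here: running paused=false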
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25: (1.662152763s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-211312                                                                                                                                                                                                                  │ kubernetes-upgrade-211312 │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ delete  │ -p force-systemd-env-255188                                                                                                                                                                                                                   │ force-systemd-env-255188  │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:10 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:10 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ cert-options-051941 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941       │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873    │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482        │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461         │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482        │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:16:07
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:16:07.916708  621879 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:16:07.916873  621879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:07.916887  621879 out.go:374] Setting ErrFile to fd 2...
	I1013 23:16:07.916892  621879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:07.917147  621879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:16:07.917581  621879 out.go:368] Setting JSON to false
	I1013 23:16:07.918531  621879 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10704,"bootTime":1760386664,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:16:07.918605  621879 start.go:141] virtualization:  
	I1013 23:16:07.921889  621879 out.go:179] * [no-preload-985461] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:16:07.925865  621879 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:16:07.925908  621879 notify.go:220] Checking for updates...
	I1013 23:16:07.931919  621879 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:16:07.934863  621879 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:07.937798  621879 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:16:07.940644  621879 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:16:07.943774  621879 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:16:07.947301  621879 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:07.947907  621879 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:16:07.980286  621879 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:16:07.980402  621879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:08.046320  621879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:08.033201781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:08.046429  621879 docker.go:318] overlay module found
	I1013 23:16:08.049687  621879 out.go:179] * Using the docker driver based on existing profile
	I1013 23:16:08.052756  621879 start.go:305] selected driver: docker
	I1013 23:16:08.052801  621879 start.go:925] validating driver "docker" against &{Name:no-preload-985461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-985461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:08.053019  621879 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:16:08.053872  621879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:08.129105  621879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:08.119674678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:08.129448  621879 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:08.129483  621879 cni.go:84] Creating CNI manager for ""
	I1013 23:16:08.129542  621879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:08.129588  621879 start.go:349] cluster config:
	{Name:no-preload-985461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-985461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:08.132890  621879 out.go:179] * Starting "no-preload-985461" primary control-plane node in "no-preload-985461" cluster
	I1013 23:16:08.135650  621879 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:16:08.138563  621879 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:16:08.141480  621879 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:16:08.141568  621879 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:08.141697  621879 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/config.json ...
	I1013 23:16:08.142024  621879 cache.go:107] acquiring lock: {Name:mk04bdc697e7da36625a7fad2d5a71a51b62c26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142111  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 23:16:08.142125  621879 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.296µs
	I1013 23:16:08.142134  621879 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 23:16:08.142321  621879 cache.go:107] acquiring lock: {Name:mk76c49b843c5728d585f847bff09a833eb53a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142386  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 23:16:08.142399  621879 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 83.633µs
	I1013 23:16:08.142407  621879 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 23:16:08.142423  621879 cache.go:107] acquiring lock: {Name:mk96502b2c62553d1cb29cdc6f4791396a808456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142460  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 23:16:08.142471  621879 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 48.631µs
	I1013 23:16:08.142479  621879 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 23:16:08.142489  621879 cache.go:107] acquiring lock: {Name:mka891e5622f5c8196dc0f7853d33e476a346451 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142522  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 23:16:08.142534  621879 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 44.857µs
	I1013 23:16:08.142546  621879 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 23:16:08.142557  621879 cache.go:107] acquiring lock: {Name:mk6a2f57fd36b54094b01c4d32380b3a47cb3f1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142588  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 23:16:08.142596  621879 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.451µs
	I1013 23:16:08.142603  621879 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 23:16:08.142613  621879 cache.go:107] acquiring lock: {Name:mk3d91564a059e45676aaa25a1097d4a14637504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142645  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 23:16:08.142654  621879 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 41.697µs
	I1013 23:16:08.142660  621879 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 23:16:08.142669  621879 cache.go:107] acquiring lock: {Name:mk5243c819eb7436505e17612b5f7d9250d837ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142706  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1013 23:16:08.142716  621879 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.852µs
	I1013 23:16:08.142722  621879 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1013 23:16:08.142743  621879 cache.go:107] acquiring lock: {Name:mkc86ec0203c85a07a89a7bb7a039473c6ae818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.142819  621879 cache.go:115] /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 23:16:08.142843  621879 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 91.059µs
	I1013 23:16:08.142863  621879 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 23:16:08.142871  621879 cache.go:87] Successfully saved all images to host disk.
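
Each cache line above follows the same pattern: acquire a per-image lock, stat the expected tar path under .minikube/cache/images/<arch>/, and skip the save when the file already exists. A minimal Go sketch of that exists-then-skip check (the base directory helper and image name here are illustrative assumptions, not minikube's exact code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cacheTarPath mirrors the layout seen in the log:
	// <home>/.minikube/cache/images/<arch>/<registry path>_<tag>
	func cacheTarPath(base, arch, image string) string {
		// "registry.k8s.io/pause:3.10.1" -> "registry.k8s.io/pause_3.10.1"
		return filepath.Join(base, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		home, _ := os.UserHomeDir()
		p := cacheTarPath(filepath.Join(home, ".minikube"), "arm64",
			"registry.k8s.io/pause:3.10.1")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("exists, skipping save:", p) // the "exists ... succeeded" case above
		} else {
			fmt.Println("missing, would save tar to:", p)
		}
	}
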
	I1013 23:16:08.162588  621879 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:16:08.162615  621879 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:16:08.162634  621879 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:16:08.162658  621879 start.go:360] acquireMachinesLock for no-preload-985461: {Name:mk18da7f2fdedb9e00b48cb3505751f2e4b7e894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:08.162718  621879 start.go:364] duration metric: took 38.859µs to acquireMachinesLock for "no-preload-985461"
	I1013 23:16:08.162742  621879 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:16:08.162757  621879 fix.go:54] fixHost starting: 
	I1013 23:16:08.163043  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:16:08.179911  621879 fix.go:112] recreateIfNeeded on no-preload-985461: state=Stopped err=<nil>
	W1013 23:16:08.179945  621879 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:16:05.252562  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:16:07.753865  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	I1013 23:16:08.183685  621879 out.go:252] * Restarting existing docker container for "no-preload-985461" ...
	I1013 23:16:08.183778  621879 cli_runner.go:164] Run: docker start no-preload-985461
	I1013 23:16:08.459600  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:16:08.487496  621879 kic.go:430] container "no-preload-985461" state is running.
	I1013 23:16:08.487893  621879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-985461
	I1013 23:16:08.516203  621879 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/config.json ...
	I1013 23:16:08.516448  621879 machine.go:93] provisionDockerMachine start ...
	I1013 23:16:08.516531  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:08.540437  621879 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:08.540776  621879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1013 23:16:08.540785  621879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:16:08.541555  621879 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 23:16:11.690808  621879 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-985461
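
The handshake EOF at 23:16:08 followed by a clean result at 23:16:11 is the usual restart race: sshd inside the freshly started container is not up yet, so the dial is retried until it answers. A generic retry sketch, assuming a fixed one-second backoff (minikube's actual retry policy may differ):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP dial until it succeeds or the
	// attempt budget is spent.
	func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c net.Conn
			if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				return c, nil
			}
			time.Sleep(wait)
		}
		return nil, fmt.Errorf("gave up after %d attempts: %w", attempts, err)
	}

	func main() {
		// 127.0.0.1:33464 is the forwarded SSH port from the log above.
		if c, err := dialWithRetry("127.0.0.1:33464", 5, time.Second); err != nil {
			fmt.Println(err)
		} else {
			c.Close()
			fmt.Println("ssh port reachable")
		}
	}
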
	
	I1013 23:16:11.690833  621879 ubuntu.go:182] provisioning hostname "no-preload-985461"
	I1013 23:16:11.690901  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:11.709322  621879 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:11.709629  621879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1013 23:16:11.709647  621879 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-985461 && echo "no-preload-985461" | sudo tee /etc/hostname
	I1013 23:16:11.870801  621879 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-985461
	
	I1013 23:16:11.870912  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:11.888980  621879 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:11.889296  621879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1013 23:16:11.889314  621879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-985461' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-985461/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-985461' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:16:12.039601  621879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
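
The script above is idempotent: it only touches /etc/hosts when the hostname is absent, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. A sketch of templating that command for an arbitrary hostname (a re-typed illustration of the logged fragment, not minikube's actual helper function):

	package main

	import "fmt"

	// buildHostsCmd reproduces the logged shell fragment for a given hostname.
	func buildHostsCmd(host string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, host)
	}

	func main() {
		fmt.Println(buildHostsCmd("no-preload-985461"))
	}
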
	I1013 23:16:12.039626  621879 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:16:12.039714  621879 ubuntu.go:190] setting up certificates
	I1013 23:16:12.039725  621879 provision.go:84] configureAuth start
	I1013 23:16:12.039823  621879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-985461
	I1013 23:16:12.058866  621879 provision.go:143] copyHostCerts
	I1013 23:16:12.058992  621879 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:16:12.059032  621879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:16:12.059136  621879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:16:12.059246  621879 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:16:12.059259  621879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:16:12.059289  621879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:16:12.059348  621879 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:16:12.059357  621879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:16:12.059382  621879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:16:12.059436  621879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.no-preload-985461 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-985461]
	I1013 23:16:12.424844  621879 provision.go:177] copyRemoteCerts
	I1013 23:16:12.424951  621879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:16:12.425011  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:12.443383  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:12.547142  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:16:12.565382  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:16:12.585313  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:16:12.603233  621879 provision.go:87] duration metric: took 563.475117ms to configureAuth
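
The server certificate generated during configureAuth carries the san=[...] list logged above: two fixed loopback names plus the machine name and its container IP. A sketch of assembling that list (the sorted order matches the log entry):

	package main

	import (
		"fmt"
		"sort"
	)

	// serverCertSANs collects the names the server cert must be valid for.
	func serverCertSANs(machineName, ip string) []string {
		san := []string{"127.0.0.1", "localhost", "minikube", machineName, ip}
		sort.Strings(san)
		return san
	}

	func main() {
		// Prints [127.0.0.1 192.168.85.2 localhost minikube no-preload-985461],
		// matching the san=[...] entry in the log.
		fmt.Println(serverCertSANs("no-preload-985461", "192.168.85.2"))
	}
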
	I1013 23:16:12.603258  621879 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:16:12.603444  621879 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:12.603553  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:12.623248  621879 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:12.623565  621879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1013 23:16:12.623581  621879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1013 23:16:10.253548  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	W1013 23:16:12.753838  617881 node_ready.go:57] node "embed-certs-505482" has "Ready":"False" status (will retry)
	I1013 23:16:14.254288  617881 node_ready.go:49] node "embed-certs-505482" is "Ready"
	I1013 23:16:14.254330  617881 node_ready.go:38] duration metric: took 40.504578103s for node "embed-certs-505482" to be "Ready" ...
	I1013 23:16:14.254349  617881 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:16:14.254437  617881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:16:14.269561  617881 api_server.go:72] duration metric: took 41.61508084s to wait for apiserver process to appear ...
	I1013 23:16:14.269582  617881 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:16:14.269601  617881 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:16:12.974595  621879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
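
That step materializes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and restarts CRI-O so the cluster's service CIDR is treated as an insecure registry range. A sketch of writing the same file (pointed at a temp directory so it runs without root; the real path is /etc/sysconfig/crio.minikube):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir, err := os.MkdirTemp("", "sysconfig")
		if err != nil {
			panic(err)
		}
		// Same payload as the logged tee command.
		body := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		path := filepath.Join(dir, "crio.minikube")
		if err := os.WriteFile(path, []byte(body), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote", path, "- the real step then runs: systemctl restart crio")
	}
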
	
	I1013 23:16:12.974676  621879 machine.go:96] duration metric: took 4.458201048s to provisionDockerMachine
	I1013 23:16:12.974701  621879 start.go:293] postStartSetup for "no-preload-985461" (driver="docker")
	I1013 23:16:12.974736  621879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:16:12.974831  621879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:16:12.974897  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:12.998405  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:13.103882  621879 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:16:13.107852  621879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:16:13.107884  621879 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:16:13.107896  621879 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:16:13.107953  621879 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:16:13.108040  621879 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:16:13.108146  621879 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:16:13.116196  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:13.134525  621879 start.go:296] duration metric: took 159.785454ms for postStartSetup
	I1013 23:16:13.134623  621879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:16:13.134679  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:13.152404  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:13.257326  621879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:16:13.262444  621879 fix.go:56] duration metric: took 5.099688832s for fixHost
	I1013 23:16:13.262469  621879 start.go:83] releasing machines lock for "no-preload-985461", held for 5.099738718s
	I1013 23:16:13.262535  621879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-985461
	I1013 23:16:13.279711  621879 ssh_runner.go:195] Run: cat /version.json
	I1013 23:16:13.279768  621879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:16:13.279830  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:13.279770  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:13.302253  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:13.305540  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:13.491734  621879 ssh_runner.go:195] Run: systemctl --version
	I1013 23:16:13.498324  621879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:16:13.534632  621879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:16:13.539017  621879 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:16:13.539148  621879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:16:13.547034  621879 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:16:13.547057  621879 start.go:495] detecting cgroup driver to use...
	I1013 23:16:13.547203  621879 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:16:13.547294  621879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:16:13.562838  621879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:16:13.576341  621879 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:16:13.576478  621879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:16:13.592384  621879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:16:13.605658  621879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:16:13.726486  621879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:16:13.857598  621879 docker.go:234] disabling docker service ...
	I1013 23:16:13.857711  621879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:16:13.873780  621879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:16:13.888820  621879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:16:14.019479  621879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:16:14.173974  621879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:16:14.192659  621879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:16:14.214358  621879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:16:14.214449  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.228156  621879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:16:14.228252  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.242492  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.254631  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.268022  621879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:16:14.279855  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.295899  621879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:14.307307  621879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
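
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A sketch of the first two rewrites applied in-process to sample input (the starting values are assumed):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"systemd\"\n"
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}
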
	I1013 23:16:14.316862  621879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:16:14.326240  621879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:16:14.334353  621879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:14.455896  621879 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:16:14.637220  621879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:16:14.637363  621879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:16:14.642049  621879 start.go:563] Will wait 60s for crictl version
	I1013 23:16:14.642175  621879 ssh_runner.go:195] Run: which crictl
	I1013 23:16:14.649476  621879 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:16:14.691523  621879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:16:14.691684  621879 ssh_runner.go:195] Run: crio --version
	I1013 23:16:14.734412  621879 ssh_runner.go:195] Run: crio --version
	I1013 23:16:14.777786  621879 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:16:14.780928  621879 cli_runner.go:164] Run: docker network inspect no-preload-985461 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
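
The --format argument in that docker network inspect call is a Go text/template rendered by the docker CLI against the inspect document. A sketch evaluating a trimmed-down version of the same template against stand-in data:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for the fields the logged template reads from the network.
		data := map[string]string{"Name": "no-preload-985461", "Driver": "bridge"}
		tmpl := template.Must(template.New("net").Parse(
			"{\"Name\": \"{{.Name}}\", \"Driver\": \"{{.Driver}}\"}\n"))
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
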
	I1013 23:16:14.810748  621879 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:16:14.815047  621879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:16:14.826137  621879 kubeadm.go:883] updating cluster {Name:no-preload-985461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-985461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1013 23:16:14.826271  621879 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:14.826332  621879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:16:14.866853  621879 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:16:14.866883  621879 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:16:14.866892  621879 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 23:16:14.866985  621879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-985461 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-985461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:16:14.867071  621879 ssh_runner.go:195] Run: crio config
	I1013 23:16:14.926329  621879 cni.go:84] Creating CNI manager for ""
	I1013 23:16:14.926354  621879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:14.926378  621879 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:16:14.926402  621879 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-985461 NodeName:no-preload-985461 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:16:14.926537  621879 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-985461"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
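
The generated kubeadm.yaml above is a four-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration. A sketch that splits such a stream on its document separators and reports each kind (stdlib only, no YAML parser; the abbreviated stream is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
			"---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n" +
			"---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
			"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		for i, doc := range strings.Split(stream, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}
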
	
	I1013 23:16:14.926608  621879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:16:14.936015  621879 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:16:14.936096  621879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:16:14.946132  621879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 23:16:14.958912  621879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:16:14.979162  621879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1013 23:16:14.996001  621879 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:16:15.000138  621879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
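
The one-liner above is the recurring hosts-update pattern in these logs: filter out any stale control-plane.minikube.internal entry with grep -v, append the fresh mapping, write to a temp file, then sudo cp it over /etc/hosts. The same transform done in-process on a string:

	package main

	import (
		"fmt"
		"strings"
	)

	// updateHosts drops lines ending in "\t<name>" and appends "<ip>\t<name>",
	// mirroring the grep -v / echo pipeline in the log.
	func updateHosts(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(append(kept, ip+"\t"+name), "\n") + "\n"
	}

	func main() {
		fmt.Print(updateHosts("127.0.0.1\tlocalhost\n", "192.168.85.2",
			"control-plane.minikube.internal"))
	}
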
	I1013 23:16:15.034576  621879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:15.159165  621879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:15.174930  621879 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461 for IP: 192.168.85.2
	I1013 23:16:15.174949  621879 certs.go:195] generating shared ca certs ...
	I1013 23:16:15.174966  621879 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:15.175239  621879 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:16:15.175322  621879 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:16:15.175337  621879 certs.go:257] generating profile certs ...
	I1013 23:16:15.175476  621879 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.key
	I1013 23:16:15.175546  621879 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/apiserver.key.fd6ece16
	I1013 23:16:15.175584  621879 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/proxy-client.key
	I1013 23:16:15.175701  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:16:15.175729  621879 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:16:15.175738  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:16:15.175761  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:16:15.175783  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:16:15.175805  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:16:15.175850  621879 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:15.176476  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:16:15.205790  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:16:15.229798  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:16:15.258971  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:16:15.292671  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 23:16:15.310325  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:16:15.331247  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:16:15.362076  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:16:15.394465  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:16:15.422101  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:16:15.445203  621879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:16:15.468563  621879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:16:15.483226  621879 ssh_runner.go:195] Run: openssl version
	I1013 23:16:15.491874  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:16:15.501903  621879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:16:15.505874  621879 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:16:15.506033  621879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:16:15.548391  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:16:15.558139  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:16:15.567020  621879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:15.571171  621879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:15.571235  621879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:15.614035  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:16:15.621846  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:16:15.629998  621879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:16:15.634005  621879 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:16:15.634070  621879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:16:15.682586  621879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
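Each `openssl x509 -hash -noout` call above prints the certificate's subject hash, and the cert is then linked as /etc/ssl/certs/<hash>.0 (e.g. minikubeCA.pem becomes b5213941.0). That is the hashed-directory layout OpenSSL uses to look up trust anchors. A sketch reproducing one hash-and-link step, using the paths from the log and shelling out to openssl as minikube does over ssh:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// OpenSSL resolves CAs via <subject-hash>.<n> symlinks in the certs dir.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}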
	I1013 23:16:15.692545  621879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:16:15.697242  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:16:15.741467  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:16:15.783297  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:16:15.824227  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:16:15.871178  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:16:15.915835  621879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
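The `-checkend 86400` runs above ask openssl whether each certificate expires within the next 86400 seconds (24 hours); a non-zero exit would prompt minikube to regenerate the cert. The equivalent check in Go against NotAfter, a sketch with one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}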
	I1013 23:16:15.980893  621879 kubeadm.go:400] StartCluster: {Name:no-preload-985461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-985461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:15.981054  621879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:16:15.981156  621879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:16:16.049092  621879 cri.go:89] found id: ""
	I1013 23:16:16.049212  621879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:16:16.061859  621879 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:16:16.061880  621879 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:16:16.061980  621879 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:16:16.077890  621879 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:16:16.078870  621879 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-985461" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:16.079471  621879 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-985461" cluster setting kubeconfig missing "no-preload-985461" context setting]
	I1013 23:16:16.080380  621879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:16.082704  621879 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:16:16.115282  621879 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:16:16.115316  621879 kubeadm.go:601] duration metric: took 53.430602ms to restartPrimaryControlPlane
	I1013 23:16:16.115325  621879 kubeadm.go:402] duration metric: took 134.443795ms to StartCluster
	I1013 23:16:16.115340  621879 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:16.115403  621879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:16.116957  621879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:16.117180  621879 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:16:16.117915  621879 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:16:16.117989  621879 addons.go:69] Setting storage-provisioner=true in profile "no-preload-985461"
	I1013 23:16:16.118002  621879 addons.go:238] Setting addon storage-provisioner=true in "no-preload-985461"
	W1013 23:16:16.118011  621879 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:16:16.118033  621879 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:16:16.118609  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:16:16.118834  621879 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:16.118917  621879 addons.go:69] Setting dashboard=true in profile "no-preload-985461"
	I1013 23:16:16.118978  621879 addons.go:238] Setting addon dashboard=true in "no-preload-985461"
	W1013 23:16:16.118988  621879 addons.go:247] addon dashboard should already be in state true
	I1013 23:16:16.119038  621879 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:16:16.119532  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:16:16.121774  621879 addons.go:69] Setting default-storageclass=true in profile "no-preload-985461"
	I1013 23:16:16.121945  621879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-985461"
	I1013 23:16:16.122280  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
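The repeated `docker container inspect <name> --format={{.State.Status}}` runs above are how minikube's cli_runner polls the state of the KIC node container. The same query via the Docker Go SDK looks roughly like this, a sketch with the container name taken from the log:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	// Same data `docker container inspect --format={{.State.Status}}` prints.
	info, err := cli.ContainerInspect(context.Background(), "no-preload-985461")
	if err != nil {
		panic(err)
	}
	fmt.Println(info.State.Status) // e.g. "running" or "paused"
}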
	I1013 23:16:16.135117  621879 out.go:179] * Verifying Kubernetes components...
	I1013 23:16:16.141711  621879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:16.177436  621879 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:16:16.184642  621879 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:16:16.184760  621879 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:16:14.282160  617881 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:16:14.283598  617881 api_server.go:141] control plane version: v1.34.1
	I1013 23:16:14.283620  617881 api_server.go:131] duration metric: took 14.030798ms to wait for apiserver health ...
	I1013 23:16:14.283629  617881 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:16:14.291024  617881 system_pods.go:59] 8 kube-system pods found
	I1013 23:16:14.291153  617881 system_pods.go:61] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:14.291178  617881 system_pods.go:61] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:14.291213  617881 system_pods.go:61] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:14.291240  617881 system_pods.go:61] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:14.291261  617881 system_pods.go:61] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:14.291282  617881 system_pods.go:61] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:14.291301  617881 system_pods.go:61] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:14.291333  617881 system_pods.go:61] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:14.291359  617881 system_pods.go:74] duration metric: took 7.724094ms to wait for pod list to return data ...
	I1013 23:16:14.291381  617881 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:16:14.294538  617881 default_sa.go:45] found service account: "default"
	I1013 23:16:14.294560  617881 default_sa.go:55] duration metric: took 3.158016ms for default service account to be created ...
	I1013 23:16:14.294570  617881 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:16:14.300894  617881 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:14.300967  617881 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:14.300991  617881 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:14.301011  617881 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:14.301044  617881 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:14.301071  617881 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:14.301092  617881 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:14.301111  617881 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:14.301133  617881 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:14.301186  617881 retry.go:31] will retry after 227.743686ms: missing components: kube-dns
	I1013 23:16:14.538996  617881 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:14.539041  617881 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:14.539073  617881 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:14.539116  617881 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:14.539140  617881 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:14.539157  617881 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:14.539167  617881 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:14.539175  617881 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:14.539188  617881 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:14.539230  617881 retry.go:31] will retry after 247.325624ms: missing components: kube-dns
	I1013 23:16:14.794044  617881 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:14.794087  617881 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:14.794094  617881 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:14.794101  617881 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:14.794106  617881 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:14.794112  617881 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:14.794116  617881 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:14.794126  617881 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:14.794138  617881 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:14.794159  617881 retry.go:31] will retry after 376.517525ms: missing components: kube-dns
	I1013 23:16:15.175747  617881 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:15.175770  617881 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:15.175776  617881 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:15.175782  617881 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:15.175788  617881 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:15.175794  617881 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:15.175798  617881 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:15.175802  617881 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:15.175808  617881 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:15.175822  617881 retry.go:31] will retry after 462.509918ms: missing components: kube-dns
	I1013 23:16:15.641828  617881 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:15.641858  617881 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Running
	I1013 23:16:15.641865  617881 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running
	I1013 23:16:15.641870  617881 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:15.641874  617881 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running
	I1013 23:16:15.641879  617881 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running
	I1013 23:16:15.641884  617881 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:15.641888  617881 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running
	I1013 23:16:15.641892  617881 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Running
	I1013 23:16:15.641899  617881 system_pods.go:126] duration metric: took 1.34732451s to wait for k8s-apps to be running ...
	I1013 23:16:15.641906  617881 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:16:15.641957  617881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:16:15.660119  617881 system_svc.go:56] duration metric: took 18.201849ms WaitForService to wait for kubelet
	I1013 23:16:15.660145  617881 kubeadm.go:586] duration metric: took 43.005668616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:15.660162  617881 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:16:15.664040  617881 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:16:15.664070  617881 node_conditions.go:123] node cpu capacity is 2
	I1013 23:16:15.664083  617881 node_conditions.go:105] duration metric: took 3.915392ms to run NodePressure ...
	I1013 23:16:15.664094  617881 start.go:241] waiting for startup goroutines ...
	I1013 23:16:15.664102  617881 start.go:246] waiting for cluster config update ...
	I1013 23:16:15.664114  617881 start.go:255] writing updated cluster config ...
	I1013 23:16:15.664529  617881 ssh_runner.go:195] Run: rm -f paused
	I1013 23:16:15.668939  617881 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:15.673420  617881 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.681793  617881 pod_ready.go:94] pod "coredns-66bc5c9577-6rtz5" is "Ready"
	I1013 23:16:15.681875  617881 pod_ready.go:86] duration metric: took 8.429877ms for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.685883  617881 pod_ready.go:83] waiting for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.692947  617881 pod_ready.go:94] pod "etcd-embed-certs-505482" is "Ready"
	I1013 23:16:15.693013  617881 pod_ready.go:86] duration metric: took 7.060272ms for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.696347  617881 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.703552  617881 pod_ready.go:94] pod "kube-apiserver-embed-certs-505482" is "Ready"
	I1013 23:16:15.703629  617881 pod_ready.go:86] duration metric: took 7.207617ms for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:15.706782  617881 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:16.073662  617881 pod_ready.go:94] pod "kube-controller-manager-embed-certs-505482" is "Ready"
	I1013 23:16:16.073690  617881 pod_ready.go:86] duration metric: took 366.83422ms for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:16.279297  617881 pod_ready.go:83] waiting for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:16.674077  617881 pod_ready.go:94] pod "kube-proxy-n2g5d" is "Ready"
	I1013 23:16:16.674101  617881 pod_ready.go:86] duration metric: took 394.776204ms for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:16.874960  617881 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:17.274779  617881 pod_ready.go:94] pod "kube-scheduler-embed-certs-505482" is "Ready"
	I1013 23:16:17.274805  617881 pod_ready.go:86] duration metric: took 399.820854ms for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:17.274829  617881 pod_ready.go:40] duration metric: took 1.605859887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:17.363502  617881 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:16:17.366989  617881 out.go:179] * Done! kubectl is now configured to use "embed-certs-505482" cluster and "default" namespace by default
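The pod_ready waits above poll each control-plane pod by label (k8s-app=kube-dns, component=etcd, and so on) until its PodReady condition is True or the pod is gone. With client-go the core of that check is roughly the following sketch; the kubeconfig path here is illustrative, and only one label selector is shown:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
			}
		}
	}
}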
	I1013 23:16:16.191104  621879 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:16.191125  621879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:16:16.191190  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:16.191374  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:16:16.191385  621879 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:16:16.191425  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:16.193705  621879 addons.go:238] Setting addon default-storageclass=true in "no-preload-985461"
	W1013 23:16:16.193745  621879 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:16:16.193772  621879 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:16:16.194217  621879 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:16:16.226795  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:16.254167  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:16.256265  621879 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:16:16.256289  621879 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:16:16.256358  621879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:16:16.293674  621879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:16:16.510600  621879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:16.534632  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:16:16.534671  621879 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:16:16.545283  621879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:16.575209  621879 node_ready.go:35] waiting up to 6m0s for node "no-preload-985461" to be "Ready" ...
	I1013 23:16:16.621462  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:16:16.621486  621879 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:16:16.628071  621879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:16:16.669227  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:16:16.669248  621879 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:16:16.728752  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:16:16.728774  621879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:16:16.785290  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:16:16.785363  621879 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:16:16.816930  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:16:16.817022  621879 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:16:16.870925  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:16:16.870951  621879 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:16:16.920599  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:16:16.920624  621879 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:16:16.953161  621879 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:16:16.953187  621879 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:16:16.988650  621879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:16:21.444976  621879 node_ready.go:49] node "no-preload-985461" is "Ready"
	I1013 23:16:21.445003  621879 node_ready.go:38] duration metric: took 4.869741782s for node "no-preload-985461" to be "Ready" ...
	I1013 23:16:21.445018  621879 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:16:21.445075  621879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:16:22.801580  621879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.256261146s)
	I1013 23:16:22.801641  621879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.173546186s)
	I1013 23:16:22.801921  621879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.813157038s)
	I1013 23:16:22.802063  621879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.356976534s)
	I1013 23:16:22.802079  621879 api_server.go:72] duration metric: took 6.684855499s to wait for apiserver process to appear ...
	I1013 23:16:22.802085  621879 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:16:22.802102  621879 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:16:22.806774  621879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-985461 addons enable metrics-server
	
	I1013 23:16:22.820296  621879 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1013 23:16:22.820537  621879 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:16:22.820556  621879 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
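The 500 above is expected during a restart: /healthz aggregates per-check results, and here only poststarthook/rbac/bootstrap-roles is still failing ([-]) while every other check is [+]. minikube simply retries until the endpoint returns 200, which it does half a second later. A minimal polling sketch; TLS verification is skipped here only because this sketch wires in no CA bundle, whereas minikube verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify is an illustration-only shortcut for the
		// apiserver's self-signed cert; do not do this in real tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Println("healthz not ready:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}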
	I1013 23:16:22.823885  621879 addons.go:514] duration metric: took 6.705948645s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1013 23:16:23.302310  621879 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 23:16:23.311809  621879 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 23:16:23.313119  621879 api_server.go:141] control plane version: v1.34.1
	I1013 23:16:23.313141  621879 api_server.go:131] duration metric: took 511.049869ms to wait for apiserver health ...
	I1013 23:16:23.313150  621879 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:16:23.317760  621879 system_pods.go:59] 8 kube-system pods found
	I1013 23:16:23.317795  621879 system_pods.go:61] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:23.317809  621879 system_pods.go:61] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:23.317819  621879 system_pods.go:61] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 23:16:23.317826  621879 system_pods.go:61] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:23.317833  621879 system_pods.go:61] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:23.317840  621879 system_pods.go:61] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 23:16:23.317847  621879 system_pods.go:61] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:23.317853  621879 system_pods.go:61] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:23.317859  621879 system_pods.go:74] duration metric: took 4.703495ms to wait for pod list to return data ...
	I1013 23:16:23.317867  621879 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:16:23.321554  621879 default_sa.go:45] found service account: "default"
	I1013 23:16:23.321619  621879 default_sa.go:55] duration metric: took 3.744892ms for default service account to be created ...
	I1013 23:16:23.321644  621879 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:16:23.324841  621879 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:23.324922  621879 system_pods.go:89] "coredns-66bc5c9577-qz7kw" [3f6fa1b6-74d1-4d56-9f5c-0f06cc769b2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:23.324950  621879 system_pods.go:89] "etcd-no-preload-985461" [bbbecc6d-9254-4d56-a422-b727f5fce084] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:23.324976  621879 system_pods.go:89] "kindnet-ljpdl" [374474f9-4eef-4142-b969-273938b503bf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 23:16:23.325005  621879 system_pods.go:89] "kube-apiserver-no-preload-985461" [46ade205-4140-4846-afcb-8541a9dd00cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:23.325026  621879 system_pods.go:89] "kube-controller-manager-no-preload-985461" [b35785ac-5c01-4626-9f42-e6be92bad7fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:23.325048  621879 system_pods.go:89] "kube-proxy-24vhq" [186209cf-4abd-4f9d-925d-5ace9f59c705] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 23:16:23.325071  621879 system_pods.go:89] "kube-scheduler-no-preload-985461" [323f1904-f3e7-4509-9d7f-769877f7ab38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:23.325103  621879 system_pods.go:89] "storage-provisioner" [dd5ba110-c29a-4fc8-b404-86e61c57b62f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:16:23.325126  621879 system_pods.go:126] duration metric: took 3.462125ms to wait for k8s-apps to be running ...
	I1013 23:16:23.325148  621879 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:16:23.325232  621879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:16:23.339396  621879 system_svc.go:56] duration metric: took 14.238523ms WaitForService to wait for kubelet
	I1013 23:16:23.339467  621879 kubeadm.go:586] duration metric: took 7.222241552s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:23.339500  621879 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:16:23.342470  621879 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:16:23.342546  621879 node_conditions.go:123] node cpu capacity is 2
	I1013 23:16:23.342573  621879 node_conditions.go:105] duration metric: took 3.050679ms to run NodePressure ...
	I1013 23:16:23.342598  621879 start.go:241] waiting for startup goroutines ...
	I1013 23:16:23.342626  621879 start.go:246] waiting for cluster config update ...
	I1013 23:16:23.342653  621879 start.go:255] writing updated cluster config ...
	I1013 23:16:23.342963  621879 ssh_runner.go:195] Run: rm -f paused
	I1013 23:16:23.346572  621879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:23.350318  621879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qz7kw" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:16:25.356817  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:27.367945  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 13 23:16:14 embed-certs-505482 crio[842]: time="2025-10-13T23:16:14.676193789Z" level=info msg="Created container 413964f596a5374bc5d2737a9758d4bd3afd9131009e906c6660f17ce57c985c: kube-system/coredns-66bc5c9577-6rtz5/coredns" id=afffbe19-d3e5-41f9-b207-c72110b84028 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:14 embed-certs-505482 crio[842]: time="2025-10-13T23:16:14.67745618Z" level=info msg="Starting container: 413964f596a5374bc5d2737a9758d4bd3afd9131009e906c6660f17ce57c985c" id=b3277c21-cd3d-4c2e-871b-56584605dbda name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:16:14 embed-certs-505482 crio[842]: time="2025-10-13T23:16:14.67965945Z" level=info msg="Started container" PID=1736 containerID=413964f596a5374bc5d2737a9758d4bd3afd9131009e906c6660f17ce57c985c description=kube-system/coredns-66bc5c9577-6rtz5/coredns id=b3277c21-cd3d-4c2e-871b-56584605dbda name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba3945bd7c864c07579f808ab8fcc6e6812dad1e89a6bc1fde9ce4ee1bc1b182
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.004026002Z" level=info msg="Running pod sandbox: default/busybox/POD" id=368797d1-145e-4155-b1d0-21fe33d7e0f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.004110448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.024671797Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b17744488cd1153781523df46fb6b30c0384a1f4e5bec33d901f46c99109e45 UID:86067663-4b7a-4a32-b34b-a4256970748a NetNS:/var/run/netns/576ebaa8-c193-4375-8372-8c8246e4d0f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b2d8}] Aliases:map[]}"
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.024719599Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.038841842Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b17744488cd1153781523df46fb6b30c0384a1f4e5bec33d901f46c99109e45 UID:86067663-4b7a-4a32-b34b-a4256970748a NetNS:/var/run/netns/576ebaa8-c193-4375-8372-8c8246e4d0f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b2d8}] Aliases:map[]}"
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.039020013Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.045275698Z" level=info msg="Ran pod sandbox 3b17744488cd1153781523df46fb6b30c0384a1f4e5bec33d901f46c99109e45 with infra container: default/busybox/POD" id=368797d1-145e-4155-b1d0-21fe33d7e0f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.075564285Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67581aa9-6034-4617-8100-bdeb776287e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.075736253Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=67581aa9-6034-4617-8100-bdeb776287e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.07585095Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=67581aa9-6034-4617-8100-bdeb776287e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.080401349Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d6478589-e68a-4d89-b28e-5f2b821f65cd name=/runtime.v1.ImageService/PullImage
	Oct 13 23:16:18 embed-certs-505482 crio[842]: time="2025-10-13T23:16:18.084793843Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.150259438Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d6478589-e68a-4d89-b28e-5f2b821f65cd name=/runtime.v1.ImageService/PullImage
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.151061506Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f36334c-f184-4664-bc13-aad40f618f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.15325489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbe6a691-ebc2-42fc-99c7-f86a1221df1f name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.160441904Z" level=info msg="Creating container: default/busybox/busybox" id=cde3b442-40d3-4994-81af-1ae0f83b42c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.161441351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.166759512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.167373736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.192864356Z" level=info msg="Created container ebf820bc67963afb5eb9cca28fe2cd791acca529deddfdd998b3a46f3c2ef45d: default/busybox/busybox" id=cde3b442-40d3-4994-81af-1ae0f83b42c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.193760888Z" level=info msg="Starting container: ebf820bc67963afb5eb9cca28fe2cd791acca529deddfdd998b3a46f3c2ef45d" id=3f6d2e0d-6e5c-4f16-992f-8062b2baae30 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:16:20 embed-certs-505482 crio[842]: time="2025-10-13T23:16:20.198506704Z" level=info msg="Started container" PID=1788 containerID=ebf820bc67963afb5eb9cca28fe2cd791acca529deddfdd998b3a46f3c2ef45d description=default/busybox/busybox id=3f6d2e0d-6e5c-4f16-992f-8062b2baae30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b17744488cd1153781523df46fb6b30c0384a1f4e5bec33d901f46c99109e45
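Each CRI-O entry above is one CRI RPC: RunPodSandbox creates the pod's network namespace and attaches it to the kindnet CNI network, PullImage fetches busybox by digest, and CreateContainer/StartContainer then run the workload inside that sandbox. The same RuntimeService is reachable over the crio.sock gRPC endpoint; a sketch that lists containers much as `crictl ps -a` does:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// crio.sock path taken from the containerRuntimeEndpoint in the log.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}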
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ebf820bc67963       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   3b17744488cd1       busybox                                      default
	413964f596a53       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   ba3945bd7c864       coredns-66bc5c9577-6rtz5                     kube-system
	bd0c017679cd4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   890cacad95031       storage-provisioner                          kube-system
	82e4cb1f02dfb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   20844cf5800c6       kube-proxy-n2g5d                             kube-system
	7de15e776a8a6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   f341dfa8d93b9       kindnet-zf5h8                                kube-system
	42874015afa80       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e25805210d0ae       kube-apiserver-embed-certs-505482            kube-system
	e7063adeec15d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b2c2e0feb9c3d       etcd-embed-certs-505482                      kube-system
	46dae195c2ff0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   2c0d905d94ff3       kube-scheduler-embed-certs-505482            kube-system
	dceb8b68ad84e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   7616c4a9afbf3       kube-controller-manager-embed-certs-505482   kube-system
	
	
	==> coredns [413964f596a5374bc5d2737a9758d4bd3afd9131009e906c6660f17ce57c985c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54072 - 37189 "HINFO IN 6316116288368794018.7540937881468921223. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024428151s
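	
	The single NXDOMAIN above is CoreDNS probing a random name at startup (its loop-detection check), not a resolution failure. In-cluster DNS can be spot-checked with a throwaway pod; a sketch, assuming busybox:1.36 is pullable from the node:
	
	  $ kubectl --context embed-certs-505482 run dns-check --rm -it --restart=Never \
	      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local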
	
	
	==> describe nodes <==
	Name:               embed-certs-505482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-505482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-505482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-505482
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:16:28 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:16:28 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:16:28 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:16:28 +0000   Mon, 13 Oct 2025 23:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-505482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                19aef056-c1a4-490a-8aaa-19c46d6c5605
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-6rtz5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-505482                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-zf5h8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-505482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-embed-certs-505482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-n2g5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-505482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-505482 event: Registered Node embed-certs-505482 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-505482 status is now: NodeReady
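	
	Note that the node reached Ready only 15 seconds before this capture. The same view can be regenerated at any time; for example:
	
	  $ kubectl --context embed-certs-505482 describe node embed-certs-505482
	  $ kubectl --context embed-certs-505482 get node embed-certs-505482 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'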
	
	
	==> dmesg <==
	[Oct13 22:52] overlayfs: idmapped layers are currently not supported
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
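	
	The repeated overlayfs messages are informational: this 5.15 kernel does not support idmapped layers on overlayfs, so a line is logged each time such a mount is attempted, and they are benign here. The same buffer can be read on the node; for example:
	
	  $ minikube ssh -p embed-certs-505482 -- sudo dmesg | tail -n 25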
	
	
	==> etcd [e7063adeec15d83f026a3e957ea1a70c6b524e9405257b8f12bbcb56c9c96048] <==
	{"level":"warn","ts":"2025-10-13T23:15:21.698246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.714701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.745107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.770204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.825371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.834206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.871768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.908070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.929630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.961750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:21.988235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.024078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.063902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.106540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.139354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.166621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.198426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.234429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.276157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.303774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.319598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.358502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.376184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.395752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:15:22.517188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46858","server-name":"","error":"EOF"}
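	
	The rejected-connection warnings are EOFs on loopback during startup, typically probes or clients disconnecting before the TLS handshake completes, rather than data-plane errors. Cluster health can be confirmed from inside the etcd pod; a sketch, assuming minikube's default certificate layout under /var/lib/minikube/certs/etcd:
	
	  $ kubectl --context embed-certs-505482 -n kube-system exec etcd-embed-certs-505482 -- \
	      etcdctl --endpoints=https://127.0.0.1:2379 \
	        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	        --cert=/var/lib/minikube/certs/etcd/server.crt \
	        --key=/var/lib/minikube/certs/etcd/server.key \
	        endpoint health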
	
	
	==> kernel <==
	 23:16:29 up  2:58,  0 user,  load average: 3.93, 3.30, 2.65
	Linux embed-certs-505482 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7de15e776a8a6edf1ef002c468b19c0d8f0863c9e737512b4272db8a05131cf5] <==
	I1013 23:15:33.620495       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:15:33.620802       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:15:33.620945       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:15:33.620958       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:15:33.620972       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:15:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:15:33.905027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:15:33.905127       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:15:33.905175       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:15:33.906421       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:16:03.906531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:16:03.906531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:16:03.906638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:16:03.906729       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 23:16:05.306081       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:16:05.306120       1 metrics.go:72] Registering metrics
	I1013 23:16:05.306191       1 controller.go:711] "Syncing nftables rules"
	I1013 23:16:13.906019       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:16:13.906136       1 main.go:301] handling current node
	I1013 23:16:23.905887       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:16:23.906012       1 main.go:301] handling current node
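	
	kindnet's initial watches timed out against the 10.96.0.1:443 service VIP but its caches synced at 23:16:05, so the CNI recovered well before this capture. Apiserver readiness can be checked directly; for example:
	
	  $ kubectl --context embed-certs-505482 get --raw='/readyz?verbose'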
	
	
	==> kube-apiserver [42874015afa80732d3a8a65df792813f6f9b332f7ee3b84b9ab5c43a14ac6696] <==
	I1013 23:15:23.852296       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 23:15:23.920245       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:15:23.940365       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:15:23.980854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:15:24.307758       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 23:15:24.350364       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 23:15:24.354926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:15:25.822494       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:15:25.885035       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:15:25.992312       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 23:15:26.001209       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 23:15:26.003877       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:15:26.012127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1013 23:15:26.650173       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/kube-system/pods/kube-apiserver-embed-certs-505482/status" auditID="ede388d6-24b4-46a1-b8ba-3b9202244aec"
	E1013 23:15:26.650220       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.973µs" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-apiserver-embed-certs-505482/status" result=null
	{"level":"warn","ts":"2025-10-13T23:15:26.654305Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013a01e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1013 23:15:26.654528       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.693345ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	I1013 23:15:26.676105       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:15:26.847213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:15:26.883731       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 23:15:26.920095       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 23:15:31.834154       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:15:32.639748       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 23:15:32.996960       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:15:33.035510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dceb8b68ad84e8006d74a0b1b9009620be430512dbd702371ca3dff776b5c9d7] <==
	I1013 23:15:31.742088       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:15:31.750976       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 23:15:31.751168       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 23:15:31.751206       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 23:15:31.751218       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 23:15:31.751225       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 23:15:31.754090       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:15:31.759396       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:15:31.773911       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:15:31.778868       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:15:31.779032       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:15:31.779159       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:15:31.779311       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:15:31.780302       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 23:15:31.780423       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:15:31.780469       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:15:31.780537       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-505482"
	I1013 23:15:31.780578       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:15:31.780611       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:15:31.780642       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:15:31.780703       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:15:31.784842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:15:31.791345       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:15:31.799514       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-505482" podCIDRs=["10.244.0.0/24"]
	I1013 23:16:16.787121       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
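	
	The controller-manager entered master disruption mode at startup (no Ready nodes yet) and exited it at 23:16:16, matching the NodeReady event above. The related node events can be listed with a field selector; for example:
	
	  $ kubectl --context embed-certs-505482 get events -A \
	      --field-selector involvedObject.name=embed-certs-505482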
	
	
	==> kube-proxy [82e4cb1f02dfbd639118f14e99643f438334fa5cfa7357e2016a2445b2d0dfd7] <==
	I1013 23:15:33.638724       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:15:33.728944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:15:33.834802       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:15:33.834845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:15:33.834914       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:15:33.866367       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:15:33.866496       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:15:33.870380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:15:33.870743       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:15:33.870921       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:15:33.872240       1 config.go:200] "Starting service config controller"
	I1013 23:15:33.872333       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:15:33.872377       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:15:33.872405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:15:33.872440       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:15:33.872465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:15:33.873217       1 config.go:309] "Starting node config controller"
	I1013 23:15:33.873268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:15:33.873298       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:15:33.976695       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:15:33.977044       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:15:33.979301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
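	
	The only warning here is the incomplete-configuration note about nodePortAddresses. If NodePort traffic should be limited as the message suggests, the equivalent setting lives in the kube-proxy ConfigMap in a kubeadm-style cluster; a sketch, assuming the standard config.conf layout:
	
	  $ kubectl --context embed-certs-505482 -n kube-system get configmap kube-proxy -o yaml
	  # set nodePortAddresses: ["primary"] under config.conf, then restart the DaemonSet:
	  $ kubectl --context embed-certs-505482 -n kube-system rollout restart daemonset kube-proxy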
	
	
	==> kube-scheduler [46dae195c2ff0425cae6161961b10aea4d9c19d5c7d0d0e32da0dd8f01899447] <==
	I1013 23:15:24.775816       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:15:24.778464       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:15:24.783694       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:15:24.785606       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:15:24.785648       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 23:15:24.809826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 23:15:24.809965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 23:15:24.810021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 23:15:24.810071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 23:15:24.810145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 23:15:24.824348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:15:24.824432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:15:24.824495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 23:15:24.824545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:15:24.824613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:15:24.824676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:15:24.833126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 23:15:24.833371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 23:15:24.833440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:15:24.833500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 23:15:24.833670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 23:15:24.833794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:15:24.833856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:15:24.834469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1013 23:15:25.985923       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
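	
	The burst of "Failed to watch ... forbidden" errors is the usual startup race: the scheduler's informers begin before RBAC bootstrapping finishes, and the closing caches-synced line shows it recovered. Once bootstrapped, the permissions can be confirmed; for example:
	
	  $ kubectl --context embed-certs-505482 auth can-i list pods --as=system:kube-scheduler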
	
	
	==> kubelet <==
	Oct 13 23:15:31 embed-certs-505482 kubelet[1302]: I1013 23:15:31.828585    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 23:15:31 embed-certs-505482 kubelet[1302]: I1013 23:15:31.830066    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935336    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb87c\" (UniqueName: \"kubernetes.io/projected/efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d-kube-api-access-vb87c\") pod \"kube-proxy-n2g5d\" (UID: \"efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d\") " pod="kube-system/kube-proxy-n2g5d"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935391    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d-xtables-lock\") pod \"kube-proxy-n2g5d\" (UID: \"efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d\") " pod="kube-system/kube-proxy-n2g5d"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935414    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7567865c-bc2d-41f2-9515-bf3a0c1d5f61-cni-cfg\") pod \"kindnet-zf5h8\" (UID: \"7567865c-bc2d-41f2-9515-bf3a0c1d5f61\") " pod="kube-system/kindnet-zf5h8"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935432    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7567865c-bc2d-41f2-9515-bf3a0c1d5f61-xtables-lock\") pod \"kindnet-zf5h8\" (UID: \"7567865c-bc2d-41f2-9515-bf3a0c1d5f61\") " pod="kube-system/kindnet-zf5h8"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935452    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d-kube-proxy\") pod \"kube-proxy-n2g5d\" (UID: \"efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d\") " pod="kube-system/kube-proxy-n2g5d"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935475    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d-lib-modules\") pod \"kube-proxy-n2g5d\" (UID: \"efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d\") " pod="kube-system/kube-proxy-n2g5d"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935491    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrpg4\" (UniqueName: \"kubernetes.io/projected/7567865c-bc2d-41f2-9515-bf3a0c1d5f61-kube-api-access-xrpg4\") pod \"kindnet-zf5h8\" (UID: \"7567865c-bc2d-41f2-9515-bf3a0c1d5f61\") " pod="kube-system/kindnet-zf5h8"
	Oct 13 23:15:32 embed-certs-505482 kubelet[1302]: I1013 23:15:32.935508    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7567865c-bc2d-41f2-9515-bf3a0c1d5f61-lib-modules\") pod \"kindnet-zf5h8\" (UID: \"7567865c-bc2d-41f2-9515-bf3a0c1d5f61\") " pod="kube-system/kindnet-zf5h8"
	Oct 13 23:15:33 embed-certs-505482 kubelet[1302]: I1013 23:15:33.149944    1302 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:15:33 embed-certs-505482 kubelet[1302]: W1013 23:15:33.457323    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/crio-f341dfa8d93b9aadc2338b8db8387bd9216f7810bb70f173f8fce64be5456000 WatchSource:0}: Error finding container f341dfa8d93b9aadc2338b8db8387bd9216f7810bb70f173f8fce64be5456000: Status 404 returned error can't find the container with id f341dfa8d93b9aadc2338b8db8387bd9216f7810bb70f173f8fce64be5456000
	Oct 13 23:15:33 embed-certs-505482 kubelet[1302]: W1013 23:15:33.458097    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/crio-20844cf5800c60ba9582457c8473089e3bb138ff98d07c1d0ea9e60124382168 WatchSource:0}: Error finding container 20844cf5800c60ba9582457c8473089e3bb138ff98d07c1d0ea9e60124382168: Status 404 returned error can't find the container with id 20844cf5800c60ba9582457c8473089e3bb138ff98d07c1d0ea9e60124382168
	Oct 13 23:15:34 embed-certs-505482 kubelet[1302]: I1013 23:15:34.145856    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zf5h8" podStartSLOduration=2.145836032 podStartE2EDuration="2.145836032s" podCreationTimestamp="2025-10-13 23:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:15:34.122731179 +0000 UTC m=+7.373250533" watchObservedRunningTime="2025-10-13 23:15:34.145836032 +0000 UTC m=+7.396355378"
	Oct 13 23:15:36 embed-certs-505482 kubelet[1302]: I1013 23:15:36.977829    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n2g5d" podStartSLOduration=4.977811976 podStartE2EDuration="4.977811976s" podCreationTimestamp="2025-10-13 23:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:15:34.167151925 +0000 UTC m=+7.417671279" watchObservedRunningTime="2025-10-13 23:15:36.977811976 +0000 UTC m=+10.228331330"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: I1013 23:16:14.119020    1302 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: I1013 23:16:14.242212    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2xd\" (UniqueName: \"kubernetes.io/projected/1a2091eb-00b5-46b1-8f85-225c56508322-kube-api-access-jg2xd\") pod \"coredns-66bc5c9577-6rtz5\" (UID: \"1a2091eb-00b5-46b1-8f85-225c56508322\") " pod="kube-system/coredns-66bc5c9577-6rtz5"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: I1013 23:16:14.242421    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bfj\" (UniqueName: \"kubernetes.io/projected/7c85e3a2-d20e-48ef-84ef-980fe6e2d40e-kube-api-access-q5bfj\") pod \"storage-provisioner\" (UID: \"7c85e3a2-d20e-48ef-84ef-980fe6e2d40e\") " pod="kube-system/storage-provisioner"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: I1013 23:16:14.242595    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a2091eb-00b5-46b1-8f85-225c56508322-config-volume\") pod \"coredns-66bc5c9577-6rtz5\" (UID: \"1a2091eb-00b5-46b1-8f85-225c56508322\") " pod="kube-system/coredns-66bc5c9577-6rtz5"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: I1013 23:16:14.242710    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c85e3a2-d20e-48ef-84ef-980fe6e2d40e-tmp\") pod \"storage-provisioner\" (UID: \"7c85e3a2-d20e-48ef-84ef-980fe6e2d40e\") " pod="kube-system/storage-provisioner"
	Oct 13 23:16:14 embed-certs-505482 kubelet[1302]: W1013 23:16:14.565282    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/crio-ba3945bd7c864c07579f808ab8fcc6e6812dad1e89a6bc1fde9ce4ee1bc1b182 WatchSource:0}: Error finding container ba3945bd7c864c07579f808ab8fcc6e6812dad1e89a6bc1fde9ce4ee1bc1b182: Status 404 returned error can't find the container with id ba3945bd7c864c07579f808ab8fcc6e6812dad1e89a6bc1fde9ce4ee1bc1b182
	Oct 13 23:16:15 embed-certs-505482 kubelet[1302]: I1013 23:16:15.237895    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6rtz5" podStartSLOduration=43.23787696 podStartE2EDuration="43.23787696s" podCreationTimestamp="2025-10-13 23:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:16:15.237698338 +0000 UTC m=+48.488217709" watchObservedRunningTime="2025-10-13 23:16:15.23787696 +0000 UTC m=+48.488396306"
	Oct 13 23:16:15 embed-certs-505482 kubelet[1302]: I1013 23:16:15.238022    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.238012736 podStartE2EDuration="41.238012736s" podCreationTimestamp="2025-10-13 23:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:16:15.218512526 +0000 UTC m=+48.469031872" watchObservedRunningTime="2025-10-13 23:16:15.238012736 +0000 UTC m=+48.488532082"
	Oct 13 23:16:17 embed-certs-505482 kubelet[1302]: I1013 23:16:17.769006    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbrn\" (UniqueName: \"kubernetes.io/projected/86067663-4b7a-4a32-b34b-a4256970748a-kube-api-access-wqbrn\") pod \"busybox\" (UID: \"86067663-4b7a-4a32-b34b-a4256970748a\") " pod="default/busybox"
	Oct 13 23:16:20 embed-certs-505482 kubelet[1302]: I1013 23:16:20.242331    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.167396584 podStartE2EDuration="3.242312508s" podCreationTimestamp="2025-10-13 23:16:17 +0000 UTC" firstStartedPulling="2025-10-13 23:16:18.077129825 +0000 UTC m=+51.327649179" lastFinishedPulling="2025-10-13 23:16:20.152045757 +0000 UTC m=+53.402565103" observedRunningTime="2025-10-13 23:16:20.242050319 +0000 UTC m=+53.492569673" watchObservedRunningTime="2025-10-13 23:16:20.242312508 +0000 UTC m=+53.492831854"
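	
	Aside from a few "Failed to process watch event ... 404" warnings (a typically benign race between the kubelet's container watcher and container creation), the kubelet log shows normal volume attachment and pod-startup accounting, with busybox running about 3 seconds after creation. The full unit log is available on the node; for example:
	
	  $ minikube ssh -p embed-certs-505482 -- sudo journalctl -u kubelet --no-pager -n 50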
	
	
	==> storage-provisioner [bd0c017679cd4fd45f18c7d0791f35b34c1cb1cd8e3b4f072c0991e8b84bc381] <==
	I1013 23:16:14.656264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:16:14.674926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:16:14.674974       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:16:14.687047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:14.699595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:16:14.699815       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:16:14.702538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_6c1983e3-1665-4a81-9b72-0fe6f83a5ab8!
	I1013 23:16:14.703215       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37da86f2-4daf-4130-84ca-e44ec1613cc8", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-505482_6c1983e3-1665-4a81-9b72-0fe6f83a5ab8 became leader
	W1013 23:16:14.709366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:14.718080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:16:14.803627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_6c1983e3-1665-4a81-9b72-0fe6f83a5ab8!
	W1013 23:16:16.721710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:16.727348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:18.729973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:18.737081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:20.739777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:20.744574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:22.748202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:22.757517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:24.760815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:24.765387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:26.768853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:26.792727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:28.796334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:28.804120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-505482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.44s)

x
+
TestStartStop/group/no-preload/serial/Pause (7.98s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-985461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-985461 --alsologtostderr -v=1: exit status 80 (2.605414766s)

-- stdout --
	* Pausing node no-preload-985461 ... 
	
-- /stdout --
** stderr ** 
	I1013 23:17:10.790455  626961 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:17:10.790651  626961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:10.790838  626961 out.go:374] Setting ErrFile to fd 2...
	I1013 23:17:10.790860  626961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:10.791317  626961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:17:10.791755  626961 out.go:368] Setting JSON to false
	I1013 23:17:10.791842  626961 mustload.go:65] Loading cluster: no-preload-985461
	I1013 23:17:10.792314  626961 config.go:182] Loaded profile config "no-preload-985461": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:10.793706  626961 cli_runner.go:164] Run: docker container inspect no-preload-985461 --format={{.State.Status}}
	I1013 23:17:10.814923  626961 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:17:10.815373  626961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:10.892073  626961 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-13 23:17:10.880955626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:17:10.892821  626961 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-985461 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 23:17:10.897342  626961 out.go:179] * Pausing node no-preload-985461 ... 
	I1013 23:17:10.900249  626961 host.go:66] Checking if "no-preload-985461" exists ...
	I1013 23:17:10.900849  626961 ssh_runner.go:195] Run: systemctl --version
	I1013 23:17:10.900927  626961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-985461
	I1013 23:17:10.928938  626961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/no-preload-985461/id_rsa Username:docker}
	I1013 23:17:11.035212  626961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:11.065065  626961 pause.go:52] kubelet running: true
	I1013 23:17:11.065175  626961 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:11.371177  626961 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:11.371270  626961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:11.466498  626961 cri.go:89] found id: "0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893"
	I1013 23:17:11.466519  626961 cri.go:89] found id: "724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5"
	I1013 23:17:11.466524  626961 cri.go:89] found id: "479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce"
	I1013 23:17:11.466528  626961 cri.go:89] found id: "dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a"
	I1013 23:17:11.466531  626961 cri.go:89] found id: "8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	I1013 23:17:11.466535  626961 cri.go:89] found id: "ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc"
	I1013 23:17:11.466538  626961 cri.go:89] found id: "e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd"
	I1013 23:17:11.466541  626961 cri.go:89] found id: "ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086"
	I1013 23:17:11.466544  626961 cri.go:89] found id: "94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81"
	I1013 23:17:11.466550  626961 cri.go:89] found id: "a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	I1013 23:17:11.466554  626961 cri.go:89] found id: "5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7"
	I1013 23:17:11.466557  626961 cri.go:89] found id: ""
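The eleven IDs above are collected by running `crictl ps -a --quiet` once per namespace with an `io.kubernetes.pod.namespace` label filter and concatenating the output, exactly as the logged `sudo -s eval` command shows. A minimal Go sketch of that pattern (the helper name is invented for illustration; minikube's real logic lives in cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listByNamespace mirrors the crictl invocation logged above: one
// `crictl ps -a --quiet` per namespace, filtered by the pod-namespace label.
func listByNamespace(namespaces []string) ([]string, error) {
	var ids []string
	for _, ns := range namespaces {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps (%s): %w", ns, err)
		}
		ids = append(ids, strings.Fields(string(out))...) // one hex container ID per line
	}
	return ids, nil
}

func main() {
	ids, err := listByNamespace([]string{"kube-system", "kubernetes-dashboard", "istio-operator"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ids), "containers found")
}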
	I1013 23:17:11.466604  626961 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:11.481643  626961 retry.go:31] will retry after 151.196475ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:11Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:17:11.634080  626961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:11.648208  626961 pause.go:52] kubelet running: false
	I1013 23:17:11.648280  626961 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:11.857862  626961 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:11.858048  626961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:11.957173  626961 cri.go:89] found id: "0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893"
	I1013 23:17:11.957195  626961 cri.go:89] found id: "724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5"
	I1013 23:17:11.957201  626961 cri.go:89] found id: "479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce"
	I1013 23:17:11.957204  626961 cri.go:89] found id: "dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a"
	I1013 23:17:11.957208  626961 cri.go:89] found id: "8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	I1013 23:17:11.957212  626961 cri.go:89] found id: "ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc"
	I1013 23:17:11.957215  626961 cri.go:89] found id: "e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd"
	I1013 23:17:11.957218  626961 cri.go:89] found id: "ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086"
	I1013 23:17:11.957221  626961 cri.go:89] found id: "94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81"
	I1013 23:17:11.957227  626961 cri.go:89] found id: "a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	I1013 23:17:11.957230  626961 cri.go:89] found id: "5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7"
	I1013 23:17:11.957233  626961 cri.go:89] found id: ""
	I1013 23:17:11.957317  626961 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:11.969116  626961 retry.go:31] will retry after 267.713149ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:11Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:17:12.237623  626961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:12.253643  626961 pause.go:52] kubelet running: false
	I1013 23:17:12.253759  626961 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:12.451998  626961 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:12.452140  626961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:12.534932  626961 cri.go:89] found id: "0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893"
	I1013 23:17:12.535005  626961 cri.go:89] found id: "724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5"
	I1013 23:17:12.535026  626961 cri.go:89] found id: "479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce"
	I1013 23:17:12.535044  626961 cri.go:89] found id: "dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a"
	I1013 23:17:12.535104  626961 cri.go:89] found id: "8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	I1013 23:17:12.535127  626961 cri.go:89] found id: "ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc"
	I1013 23:17:12.535160  626961 cri.go:89] found id: "e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd"
	I1013 23:17:12.535183  626961 cri.go:89] found id: "ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086"
	I1013 23:17:12.535203  626961 cri.go:89] found id: "94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81"
	I1013 23:17:12.535241  626961 cri.go:89] found id: "a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	I1013 23:17:12.535267  626961 cri.go:89] found id: "5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7"
	I1013 23:17:12.535287  626961 cri.go:89] found id: ""
	I1013 23:17:12.535373  626961 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:12.553528  626961 retry.go:31] will retry after 432.507125ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:12Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:17:12.987056  626961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:13.001041  626961 pause.go:52] kubelet running: false
	I1013 23:17:13.001141  626961 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:13.197517  626961 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:13.197605  626961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:13.293511  626961 cri.go:89] found id: "0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893"
	I1013 23:17:13.293539  626961 cri.go:89] found id: "724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5"
	I1013 23:17:13.293545  626961 cri.go:89] found id: "479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce"
	I1013 23:17:13.293549  626961 cri.go:89] found id: "dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a"
	I1013 23:17:13.293552  626961 cri.go:89] found id: "8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	I1013 23:17:13.293556  626961 cri.go:89] found id: "ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc"
	I1013 23:17:13.293559  626961 cri.go:89] found id: "e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd"
	I1013 23:17:13.293562  626961 cri.go:89] found id: "ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086"
	I1013 23:17:13.293564  626961 cri.go:89] found id: "94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81"
	I1013 23:17:13.293571  626961 cri.go:89] found id: "a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	I1013 23:17:13.293574  626961 cri.go:89] found id: "5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7"
	I1013 23:17:13.293577  626961 cri.go:89] found id: ""
	I1013 23:17:13.293626  626961 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:13.313041  626961 out.go:203] 
	W1013 23:17:13.316172  626961 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:17:13.316196  626961 out.go:285] * 
	W1013 23:17:13.324218  626961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:17:13.327480  626961 out.go:203] 

                                                
                                                
** /stderr **
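The failure above hinges on how `runc list` works: it enumerates the state directory (/run/runc by default), and runc creates that directory lazily, only once it has started a container under that root. If the CRI runtime drives runc with a different --root (or the directory was cleaned up), the default path never appears and every `sudo runc list -f json` exits 1, so the pause retries are exhausted and the command aborts with GUEST_PAUSE even though crictl can still see the containers. One defensive workaround (an assumption for illustration, not what minikube does) is to treat a missing state directory as an empty container list:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const runcRoot = "/run/runc" // runc's default state dir; created lazily

// listRunning returns the JSON container list, mapping a missing state
// directory to "no containers" instead of a fatal error.
func listRunning() ([]byte, error) {
	if _, err := os.Stat(runcRoot); os.IsNotExist(err) {
		return []byte("[]"), nil // nothing has been started under this root yet
	}
	return exec.Command("sudo", "runc", "--root", runcRoot, "list", "-f", "json").Output()
}

func main() {
	out, err := listRunning()
	if err != nil {
		fmt.Fprintln(os.Stderr, "runc list:", err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}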
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-985461 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-985461
helpers_test.go:243: (dbg) docker inspect no-preload-985461:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	        "Created": "2025-10-13T23:14:18.084587368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:16:08.216214039Z",
	            "FinishedAt": "2025-10-13T23:16:07.36824999Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hostname",
	        "HostsPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hosts",
	        "LogPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad-json.log",
	        "Name": "/no-preload-985461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-985461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-985461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	                "LowerDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-985461",
	                "Source": "/var/lib/docker/volumes/no-preload-985461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-985461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-985461",
	                "name.minikube.sigs.k8s.io": "no-preload-985461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "450f141f5032548cab588386fd6dc762e18fb90a52718ff1d514a19f128e9860",
	            "SandboxKey": "/var/run/docker/netns/450f141f5032",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-985461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:01:4f:ec:61:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2b0b0019112f54c353afbc5f1c7d7acc1a1a4608af0cb49812ab4cf98cbb0b9",
	                    "EndpointID": "b7afd0ca1770e74a4e8d26e227f3e40722382d84eb0973091125417b010ed978",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-985461",
	                        "24722b872d75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
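The harness never parses this JSON wholesale; it extracts single fields with Go templates, as in the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call used earlier to find the SSH port. A small Go sketch of the same extraction:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks the Docker CLI for just the host port bound to the
// container's 22/tcp endpoint, using the same template as the test harness.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshHostPort("no-preload-985461")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + port) // 33464 in the inspect output above
}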
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461: exit status 2 (453.618508ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-985461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-985461 logs -n 25: (1.624053807s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:16:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:16:42.964614  624746 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:16:42.964804  624746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:42.964818  624746 out.go:374] Setting ErrFile to fd 2...
	I1013 23:16:42.964824  624746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:42.965135  624746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:16:42.965618  624746 out.go:368] Setting JSON to false
	I1013 23:16:42.966698  624746 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10739,"bootTime":1760386664,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:16:42.966771  624746 start.go:141] virtualization:  
	I1013 23:16:42.970372  624746 out.go:179] * [embed-certs-505482] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:16:42.973420  624746 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:16:42.973582  624746 notify.go:220] Checking for updates...
	I1013 23:16:42.976734  624746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:16:42.979743  624746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:42.982722  624746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:16:42.985720  624746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:16:42.988697  624746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:16:42.992170  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:42.992824  624746 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:16:43.025996  624746 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:16:43.026121  624746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:43.101448  624746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:43.089909136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:43.101571  624746 docker.go:318] overlay module found
	I1013 23:16:43.104942  624746 out.go:179] * Using the docker driver based on existing profile
	I1013 23:16:43.108368  624746 start.go:305] selected driver: docker
	I1013 23:16:43.108389  624746 start.go:925] validating driver "docker" against &{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:43.108578  624746 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:16:43.109602  624746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:43.200526  624746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:43.189361347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:43.200881  624746 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:43.200912  624746 cni.go:84] Creating CNI manager for ""
	I1013 23:16:43.200965  624746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:43.200997  624746 start.go:349] cluster config:
	{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:43.204431  624746 out.go:179] * Starting "embed-certs-505482" primary control-plane node in "embed-certs-505482" cluster
	I1013 23:16:43.207434  624746 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:16:43.210696  624746 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:16:43.213584  624746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:43.213643  624746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:16:43.213656  624746 cache.go:58] Caching tarball of preloaded images
	I1013 23:16:43.213679  624746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:16:43.213760  624746 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:16:43.213771  624746 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:16:43.213879  624746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:16:43.234329  624746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:16:43.234347  624746 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:16:43.234366  624746 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:16:43.234388  624746 start.go:360] acquireMachinesLock for embed-certs-505482: {Name:mk60574f1c53ab31d166b72e157fd38e1fef9702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:43.234437  624746 start.go:364] duration metric: took 32.951µs to acquireMachinesLock for "embed-certs-505482"
	I1013 23:16:43.234456  624746 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:16:43.234467  624746 fix.go:54] fixHost starting: 
	I1013 23:16:43.234884  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:43.255803  624746 fix.go:112] recreateIfNeeded on embed-certs-505482: state=Stopped err=<nil>
	W1013 23:16:43.255836  624746 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:16:44.856516  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:46.856924  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:43.259350  624746 out.go:252] * Restarting existing docker container for "embed-certs-505482" ...
	I1013 23:16:43.259449  624746 cli_runner.go:164] Run: docker start embed-certs-505482
	I1013 23:16:43.642295  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:43.667322  624746 kic.go:430] container "embed-certs-505482" state is running.
	I1013 23:16:43.667776  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:43.697538  624746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:16:43.698092  624746 machine.go:93] provisionDockerMachine start ...
	I1013 23:16:43.698163  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:43.730384  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:43.731003  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:43.731020  624746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:16:43.732096  624746 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 23:16:46.887033  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
	
	I1013 23:16:46.887055  624746 ubuntu.go:182] provisioning hostname "embed-certs-505482"
	I1013 23:16:46.887149  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:46.908795  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:46.909101  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:46.909122  624746 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-505482 && echo "embed-certs-505482" | sudo tee /etc/hostname
	I1013 23:16:47.086159  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
	
	I1013 23:16:47.086303  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:47.115034  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:47.115498  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:47.115519  624746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-505482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-505482/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-505482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:16:47.271490  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
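The shell run just above is an idempotent /etc/hosts edit: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A local Go rendition of the same logic (illustrative only; minikube executes the equivalent shell over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry makes hostname resolve locally, mirroring the grep/sed
// pipeline above: skip if present, rewrite 127.0.1.1 if found, else append.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-505482"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}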
	I1013 23:16:47.271519  624746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:16:47.271539  624746 ubuntu.go:190] setting up certificates
	I1013 23:16:47.271550  624746 provision.go:84] configureAuth start
	I1013 23:16:47.271613  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:47.294143  624746 provision.go:143] copyHostCerts
	I1013 23:16:47.294208  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:16:47.294225  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:16:47.294297  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:16:47.294389  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:16:47.294403  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:16:47.294431  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:16:47.294490  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:16:47.294500  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:16:47.294531  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:16:47.294590  624746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.embed-certs-505482 san=[127.0.0.1 192.168.76.2 embed-certs-505482 localhost minikube]
	I1013 23:16:48.394872  624746 provision.go:177] copyRemoteCerts
	I1013 23:16:48.394986  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:16:48.395059  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:48.413854  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:48.530000  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:16:48.561559  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:16:48.590752  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:16:48.620041  624746 provision.go:87] duration metric: took 1.34845917s to configureAuth
	I1013 23:16:48.620127  624746 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:16:48.620388  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:48.620607  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:48.643621  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:48.644078  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:48.644107  624746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:16:49.047381  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:16:49.047447  624746 machine.go:96] duration metric: took 5.349338746s to provisionDockerMachine
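
The provisioning step just completed writes an environment drop-in consumed by the CRI-O systemd unit, then restarts the service; the echoed CRIO_MINIKUBE_OPTIONS line is the file's content being confirmed back. A sketch of the same write plus a quick verification (file path and variable taken from the log; the check is an assumption, not something minikube runs):

    # Write the CRI-O environment drop-in and restart the service.
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube >/dev/null
    sudo systemctl restart crio
    systemctl is-active crio   # expect: active
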
	I1013 23:16:49.047475  624746 start.go:293] postStartSetup for "embed-certs-505482" (driver="docker")
	I1013 23:16:49.047500  624746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:16:49.047596  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:16:49.047664  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.084740  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.200770  624746 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:16:49.204833  624746 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:16:49.204905  624746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:16:49.204931  624746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:16:49.205016  624746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:16:49.205143  624746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:16:49.205301  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:16:49.214064  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:49.234811  624746 start.go:296] duration metric: took 187.291282ms for postStartSetup
	I1013 23:16:49.234937  624746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:16:49.235016  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.260160  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.369499  624746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:16:49.375030  624746 fix.go:56] duration metric: took 6.140553684s for fixHost
	I1013 23:16:49.375056  624746 start.go:83] releasing machines lock for "embed-certs-505482", held for 6.140609839s
	I1013 23:16:49.375167  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:49.400842  624746 ssh_runner.go:195] Run: cat /version.json
	I1013 23:16:49.400898  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.400906  624746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:16:49.400986  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.433722  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.447335  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.681552  624746 ssh_runner.go:195] Run: systemctl --version
	I1013 23:16:49.688810  624746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:16:49.730899  624746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:16:49.735592  624746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:16:49.735692  624746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:16:49.743887  624746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
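
The find/mv command above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so CRI-O will not load them (here nothing matched). An equivalent, slightly more readable form of the same rename:

    # Rename bridge/podman CNI configs so the runtime ignores them,
    # mirroring the logged find command.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
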
	I1013 23:16:49.743910  624746 start.go:495] detecting cgroup driver to use...
	I1013 23:16:49.743941  624746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:16:49.743993  624746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:16:49.760068  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:16:49.774185  624746 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:16:49.774250  624746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:16:49.790296  624746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:16:49.804016  624746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:16:49.925591  624746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:16:50.037869  624746 docker.go:234] disabling docker service ...
	I1013 23:16:50.037998  624746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:16:50.054594  624746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:16:50.068537  624746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:16:50.190446  624746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:16:50.307013  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:16:50.320550  624746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:16:50.335368  624746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:16:50.335463  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.344921  624746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:16:50.345004  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.356772  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.367797  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.378056  624746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:16:50.387638  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.398398  624746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.408206  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.417060  624746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:16:50.424736  624746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:16:50.431803  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:50.543667  624746 ssh_runner.go:195] Run: sudo systemctl restart crio
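
All of the sed calls above target a single CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force the cgroupfs driver, recreate conmon_cgroup next to it, and inject a default sysctl that lets pods bind low ports. The same edits grouped into one sketch (values taken verbatim from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                  # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
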
	I1013 23:16:51.044843  624746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:16:51.044913  624746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:16:51.048822  624746 start.go:563] Will wait 60s for crictl version
	I1013 23:16:51.048895  624746 ssh_runner.go:195] Run: which crictl
	I1013 23:16:51.052693  624746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:16:51.078417  624746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:16:51.078500  624746 ssh_runner.go:195] Run: crio --version
	I1013 23:16:51.109148  624746 ssh_runner.go:195] Run: crio --version
	I1013 23:16:51.146988  624746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:16:51.150408  624746 cli_runner.go:164] Run: docker network inspect embed-certs-505482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:16:51.167902  624746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:16:51.172125  624746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
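
The one-liner above replaces any host.minikube.internal entry by filtering it out, appending the gateway mapping, and copying the result back, so /etc/hosts is never truncated mid-edit. The same pattern, unpacked:

    # Rebuild /etc/hosts in a temp file, then copy it into place in one step.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
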
	I1013 23:16:51.182266  624746 kubeadm.go:883] updating cluster {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:16:51.182391  624746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:51.182451  624746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:16:51.215495  624746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:16:51.215519  624746 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:16:51.215579  624746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:16:51.245229  624746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:16:51.245257  624746 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:16:51.245266  624746 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:16:51.245376  624746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-505482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:16:51.245474  624746 ssh_runner.go:195] Run: crio config
	I1013 23:16:51.319637  624746 cni.go:84] Creating CNI manager for ""
	I1013 23:16:51.319661  624746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:51.319726  624746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:16:51.319764  624746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-505482 NodeName:embed-certs-505482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:16:51.319913  624746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-505482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:16:51.320016  624746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:16:51.328652  624746 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:16:51.328741  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:16:51.337002  624746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1013 23:16:51.349939  624746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:16:51.365473  624746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
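
The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new so it can later be diffed against the live copy. On recent kubeadm releases the staged file could also be sanity-checked before use; a sketch, assuming kubeadm >= v1.26 semantics for the validate subcommand:

    # Validate the staged config without touching the cluster
    # (kubeadm config validate exists on v1.26+; older releases lack it).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
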
	I1013 23:16:51.378367  624746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:16:51.382132  624746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:16:51.391932  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:51.505640  624746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:51.522792  624746 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482 for IP: 192.168.76.2
	I1013 23:16:51.522863  624746 certs.go:195] generating shared ca certs ...
	I1013 23:16:51.522897  624746 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:51.523153  624746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:16:51.523238  624746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:16:51.523285  624746 certs.go:257] generating profile certs ...
	I1013 23:16:51.523424  624746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.key
	I1013 23:16:51.523523  624746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4
	I1013 23:16:51.523591  624746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key
	I1013 23:16:51.523732  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:16:51.523809  624746 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:16:51.523846  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:16:51.523901  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:16:51.523965  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:16:51.524023  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:16:51.524097  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:51.524849  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:16:51.554072  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:16:51.590057  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:16:51.614436  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:16:51.633981  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 23:16:51.656587  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:16:51.679490  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:16:51.709263  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:16:51.733073  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:16:51.759383  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:16:51.779151  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:16:51.799675  624746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:16:51.813671  624746 ssh_runner.go:195] Run: openssl version
	I1013 23:16:51.820371  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:16:51.830177  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.833900  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.833973  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.880657  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:16:51.889000  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:16:51.900115  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.904023  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.904139  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.945166  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:16:51.953721  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:16:51.962393  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:16:51.966127  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:16:51.966205  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:16:52.008251  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
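
The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are not arbitrary: each is the subject hash that `openssl x509 -hash` prints for the corresponding PEM, which is the lookup scheme OpenSSL uses when scanning /etc/ssl/certs. A sketch of linking one CA cert by hand:

    # Link a CA cert into /etc/ssl/certs under its subject-hash name.
    pem=/usr/share/ca-certificates/minikubeCA.pem   # any of the PEMs above
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
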
	I1013 23:16:52.017077  624746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:16:52.021602  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:16:52.063647  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:16:52.114709  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:16:52.156188  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:16:52.198432  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:16:52.242999  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
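
Each of the six checks above uses `-checkend 86400`, which makes openssl exit nonzero if the certificate expires within the next 86400 seconds (24 hours); that exit status is what decides whether a control-plane cert needs regeneration. A condensed sketch over two of the logged paths:

    # Fail fast on any control-plane cert expiring within 24h.
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "renew soon: $crt"
    done
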
	I1013 23:16:52.301359  624746 kubeadm.go:400] StartCluster: {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:52.301453  624746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:16:52.301522  624746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:16:52.363843  624746 cri.go:89] found id: "571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7"
	I1013 23:16:52.363871  624746 cri.go:89] found id: "dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7"
	I1013 23:16:52.363877  624746 cri.go:89] found id: ""
	I1013 23:16:52.363924  624746 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:16:52.384347  624746 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:16:52Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:16:52.384518  624746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:16:52.417755  624746 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:16:52.417816  624746 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:16:52.417899  624746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:16:52.432936  624746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:16:52.433567  624746 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-505482" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:52.433866  624746 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-505482" cluster setting kubeconfig missing "embed-certs-505482" context setting]
	I1013 23:16:52.434397  624746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.436184  624746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:16:52.444493  624746 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 23:16:52.444566  624746 kubeadm.go:601] duration metric: took 26.729802ms to restartPrimaryControlPlane
	I1013 23:16:52.444591  624746 kubeadm.go:402] duration metric: took 143.242175ms to StartCluster
	I1013 23:16:52.444635  624746 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.444711  624746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:52.445961  624746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.446247  624746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:16:52.446718  624746 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:16:52.446786  624746 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-505482"
	I1013 23:16:52.446801  624746 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-505482"
	W1013 23:16:52.446807  624746 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:16:52.446830  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.447240  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:52.447640  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.447811  624746 addons.go:69] Setting dashboard=true in profile "embed-certs-505482"
	I1013 23:16:52.447847  624746 addons.go:238] Setting addon dashboard=true in "embed-certs-505482"
	W1013 23:16:52.447869  624746 addons.go:247] addon dashboard should already be in state true
	I1013 23:16:52.447920  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.448416  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.452924  624746 out.go:179] * Verifying Kubernetes components...
	I1013 23:16:52.448551  624746 addons.go:69] Setting default-storageclass=true in profile "embed-certs-505482"
	I1013 23:16:52.454250  624746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-505482"
	I1013 23:16:52.454569  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.457614  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:52.521158  624746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:16:52.524772  624746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:16:52.524947  624746 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:52.524970  624746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:16:52.525038  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.530141  624746 addons.go:238] Setting addon default-storageclass=true in "embed-certs-505482"
	W1013 23:16:52.530165  624746 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:16:52.530189  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.530606  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.530724  624746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1013 23:16:48.857024  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:51.356633  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:52.533714  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:16:52.533740  624746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:16:52.533808  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.574271  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.576547  624746 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:16:52.576567  624746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:16:52.576628  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.590329  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.619542  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.836432  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:52.864747  624746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:52.888519  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:16:52.888586  624746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:16:52.900599  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1013 23:16:53.360006  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:55.855604  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:57.356594  621879 pod_ready.go:94] pod "coredns-66bc5c9577-qz7kw" is "Ready"
	I1013 23:16:57.356676  621879 pod_ready.go:86] duration metric: took 34.006287602s for pod "coredns-66bc5c9577-qz7kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.359414  621879 pod_ready.go:83] waiting for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.363399  621879 pod_ready.go:94] pod "etcd-no-preload-985461" is "Ready"
	I1013 23:16:57.363474  621879 pod_ready.go:86] duration metric: took 4.036834ms for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.365563  621879 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.369891  621879 pod_ready.go:94] pod "kube-apiserver-no-preload-985461" is "Ready"
	I1013 23:16:57.369963  621879 pod_ready.go:86] duration metric: took 4.33286ms for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.372176  621879 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.554273  621879 pod_ready.go:94] pod "kube-controller-manager-no-preload-985461" is "Ready"
	I1013 23:16:57.554354  621879 pod_ready.go:86] duration metric: took 182.107947ms for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.754688  621879 pod_ready.go:83] waiting for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:52.979116  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:16:52.979196  624746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:16:53.088322  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:16:53.088396  624746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:16:53.162089  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:16:53.162163  624746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:16:53.182601  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:16:53.182678  624746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:16:53.207593  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:16:53.207669  624746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:16:53.240878  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:16:53.240952  624746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:16:53.256695  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:16:53.256771  624746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:16:53.270064  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:16:53.270136  624746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:16:53.295328  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
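
The apply above passes ten -f flags so kubectl submits all dashboard manifests in one invocation. A directory apply would be shorter, but it would also pick up the storage-provisioner and storageclass manifests already staged in the same directory earlier in this log, which is presumably why the files are enumerated explicitly:

    # Directory form (applies everything under the addons dir, not just
    # the dashboard manifests, so minikube does not use it here):
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/
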
	I1013 23:16:58.154739  621879 pod_ready.go:94] pod "kube-proxy-24vhq" is "Ready"
	I1013 23:16:58.154762  621879 pod_ready.go:86] duration metric: took 400.000111ms for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.355291  621879 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.754918  621879 pod_ready.go:94] pod "kube-scheduler-no-preload-985461" is "Ready"
	I1013 23:16:58.754944  621879 pod_ready.go:86] duration metric: took 399.629526ms for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.754957  621879 pod_ready.go:40] duration metric: took 35.408313176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:58.860401  621879 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:16:58.863397  621879 out.go:179] * Done! kubectl is now configured to use "no-preload-985461" cluster and "default" namespace by default
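
The pod_ready loop that just finished polls each labelled kube-system pod until it reports Ready or disappears. Outside minikube's own code, `kubectl wait` expresses roughly the same check; a sketch, assuming the context name minikube wrote above:

    # Block until the CoreDNS pods report Ready (4m timeout), roughly
    # what pod_ready.go polled for in the log above.
    kubectl --context no-preload-985461 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
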
	I1013 23:16:59.060047  624746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.195225949s)
	I1013 23:16:59.060092  624746 node_ready.go:35] waiting up to 6m0s for node "embed-certs-505482" to be "Ready" ...
	I1013 23:16:59.060431  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.159764541s)
	I1013 23:16:59.061531  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.225013445s)
	I1013 23:16:59.119238  624746 node_ready.go:49] node "embed-certs-505482" is "Ready"
	I1013 23:16:59.119274  624746 node_ready.go:38] duration metric: took 59.159493ms for node "embed-certs-505482" to be "Ready" ...
	I1013 23:16:59.119289  624746 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:16:59.119348  624746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:16:59.216051  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.92063321s)
	I1013 23:16:59.216084  624746 api_server.go:72] duration metric: took 6.769780007s to wait for apiserver process to appear ...
	I1013 23:16:59.216098  624746 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:16:59.216132  624746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:16:59.219186  624746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-505482 addons enable metrics-server
	
	I1013 23:16:59.222097  624746 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 23:16:59.225053  624746 addons.go:514] duration metric: took 6.77831956s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 23:16:59.234101  624746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:16:59.234128  624746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 23:16:59.716750  624746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:16:59.725824  624746 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:16:59.726977  624746 api_server.go:141] control plane version: v1.34.1
	I1013 23:16:59.727008  624746 api_server.go:131] duration metric: took 510.902459ms to wait for apiserver health ...
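
The 500 responses above come from the rbac/bootstrap-roles post-start hook still running ("reason withheld"); the probe simply retries about every 500ms until /healthz returns 200, as it does at 23:16:59.725. A curl sketch of the same poll (-k skips TLS verification for brevity; minikube itself verifies against its own CA):

    # Poll the apiserver health endpoint until it returns HTTP 200.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' \
          https://192.168.76.2:8443/healthz)" = 200 ]; do
      sleep 0.5
    done
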
	I1013 23:16:59.727018  624746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:16:59.730458  624746 system_pods.go:59] 8 kube-system pods found
	I1013 23:16:59.730500  624746 system_pods.go:61] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:59.730511  624746 system_pods.go:61] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:59.730520  624746 system_pods.go:61] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:59.730527  624746 system_pods.go:61] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:59.730538  624746 system_pods.go:61] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:59.730544  624746 system_pods.go:61] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:59.730550  624746 system_pods.go:61] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:59.730555  624746 system_pods.go:61] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Running
	I1013 23:16:59.730561  624746 system_pods.go:74] duration metric: took 3.537242ms to wait for pod list to return data ...
	I1013 23:16:59.730571  624746 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:16:59.733179  624746 default_sa.go:45] found service account: "default"
	I1013 23:16:59.733207  624746 default_sa.go:55] duration metric: took 2.626729ms for default service account to be created ...
	I1013 23:16:59.733218  624746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:16:59.736534  624746 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:59.736581  624746 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:59.736598  624746 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:59.736604  624746 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:59.736617  624746 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:59.736627  624746 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:59.736637  624746 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:59.736645  624746 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:59.736654  624746 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Running
	I1013 23:16:59.736662  624746 system_pods.go:126] duration metric: took 3.438462ms to wait for k8s-apps to be running ...
	I1013 23:16:59.736674  624746 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:16:59.736734  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:16:59.751289  624746 system_svc.go:56] duration metric: took 14.604121ms WaitForService to wait for kubelet
	I1013 23:16:59.751324  624746 kubeadm.go:586] duration metric: took 7.305015245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:59.751345  624746 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:16:59.757327  624746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:16:59.757371  624746 node_conditions.go:123] node cpu capacity is 2
	I1013 23:16:59.757386  624746 node_conditions.go:105] duration metric: took 6.036063ms to run NodePressure ...
	I1013 23:16:59.757399  624746 start.go:241] waiting for startup goroutines ...
	I1013 23:16:59.757406  624746 start.go:246] waiting for cluster config update ...
	I1013 23:16:59.757418  624746 start.go:255] writing updated cluster config ...
	I1013 23:16:59.757728  624746 ssh_runner.go:195] Run: rm -f paused
	I1013 23:16:59.762066  624746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:59.767003  624746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:17:01.772998  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:03.778134  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:06.273229  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:08.273482  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:10.773195  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 13 23:16:48 no-preload-985461 crio[647]: time="2025-10-13T23:16:48.681552781Z" level=info msg="Removed container d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4/dashboard-metrics-scraper" id=bcd63e16-09b5-4f9c-8684-242879ea342f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:16:52 no-preload-985461 conmon[1114]: conmon 8323fd99c8ddeb0e49de <ninfo>: container 1118 exited with status 1
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.675306612Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6b162912-9003-42f5-9734-baee9f01149e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.676729517Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc03920-9bf4-44bf-b5e6-d5a0cdfb27cd name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.678344147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aad009b7-ba31-4f13-84a0-782c59d03b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.678593061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683391939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683597654Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9dab15d7d4b35b88df67d274b26a842bf7110d67284abb27d44a555757e10349/merged/etc/passwd: no such file or directory"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683635914Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9dab15d7d4b35b88df67d274b26a842bf7110d67284abb27d44a555757e10349/merged/etc/group: no such file or directory"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683937602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.74493238Z" level=info msg="Created container 0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893: kube-system/storage-provisioner/storage-provisioner" id=aad009b7-ba31-4f13-84a0-782c59d03b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.747414308Z" level=info msg="Starting container: 0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893" id=f8070729-4736-415f-8945-63ca95ff3eb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.756866319Z" level=info msg="Started container" PID=1622 containerID=0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893 description=kube-system/storage-provisioner/storage-provisioner id=f8070729-4736-415f-8945-63ca95ff3eb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31aa621d8b16538888922a52c565c47d0051ea1f66e9b8c1efce8f9374e7b762
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.307576989Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312892681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312948278Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312977742Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.319974754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.320144285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.320229461Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.327052126Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.32770712Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.327801452Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.333425462Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.333618861Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0db2d59931d13       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   31aa621d8b165       storage-provisioner                          kube-system
	a981ae5b4f200       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   1c008cfe54b9a       dashboard-metrics-scraper-6ffb444bf9-x9rr4   kubernetes-dashboard
	5f46110128fb8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   26 seconds ago      Running             kubernetes-dashboard        0                   18f58d81f5293       kubernetes-dashboard-855c9754f9-xr9sp        kubernetes-dashboard
	724b420b38a8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   f4bf9f8d9df3c       coredns-66bc5c9577-qz7kw                     kube-system
	875b00f702502       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   8f96978202ba8       busybox                                      default
	479a4c6a54e2e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   c7502ccae1d7e       kube-proxy-24vhq                             kube-system
	dd48b184df6b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   3da5924a0a7d6       kindnet-ljpdl                                kube-system
	8323fd99c8dde       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   31aa621d8b165       storage-provisioner                          kube-system
	ca197e5ccabad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   efd4b36a2975f       etcd-no-preload-985461                       kube-system
	e6dcc041a964a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   ebaa76727bb26       kube-scheduler-no-preload-985461             kube-system
	ad4b2abb5a0c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   46bad781ac1db       kube-apiserver-no-preload-985461             kube-system
	94ced949d329c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   12f9c71e2925a       kube-controller-manager-no-preload-985461    kube-system
	
	
	==> coredns [724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46308 - 4391 "HINFO IN 7731610983691727930.1757820391371373112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025507277s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-985461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-985461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-985461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-985461
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-985461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                c81637a3-d3d8-45df-8334-a3fb5c4d8e37
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-qz7kw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-985461                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-ljpdl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-985461              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-no-preload-985461     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-24vhq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-985461              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x9rr4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xr9sp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 116s                   kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s                   node-controller  Node no-preload-985461 event: Registered Node no-preload-985461 in Controller
	  Normal   NodeReady                93s                    kubelet          Node no-preload-985461 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node no-preload-985461 event: Registered Node no-preload-985461 in Controller
	
	
	==> dmesg <==
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc] <==
	{"level":"warn","ts":"2025-10-13T23:16:19.495296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.539275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.578040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.612073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.632088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.669106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.742239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.780196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.835511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.877155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.922385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.962373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.031437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.053270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.076799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.108062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.134676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.166452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.254506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.275568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.316174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.340989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.363655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.396707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.464668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:17:14 up  2:59,  0 user,  load average: 4.14, 3.45, 2.73
	Linux no-preload-985461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a] <==
	I1013 23:16:23.045253       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:16:23.103595       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:16:23.103828       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:16:23.103894       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:16:23.103945       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:16:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:16:23.304508       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:16:23.304588       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:16:23.304622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:16:23.305330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:16:53.305232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:16:53.305357       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:16:53.305471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:16:53.305547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 23:16:54.705026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:16:54.705100       1 metrics.go:72] Registering metrics
	I1013 23:16:54.705168       1 controller.go:711] "Syncing nftables rules"
	I1013 23:17:03.307164       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:17:03.307293       1 main.go:301] handling current node
	I1013 23:17:13.311343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:17:13.311394       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086] <==
	I1013 23:16:21.676845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:16:21.677013       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:16:21.677193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:16:21.693758       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:16:21.694572       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:16:21.694699       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:16:21.697695       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:16:21.697722       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:16:21.699325       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:16:21.699334       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:16:21.699338       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:16:21.699344       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:16:21.698854       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 23:16:21.729556       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:16:22.207584       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:16:22.255512       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:16:22.315703       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:16:22.387779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:16:22.405105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:16:22.416104       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:16:22.610545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.123.223"}
	I1013 23:16:22.657182       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.131.66"}
	I1013 23:16:25.068227       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:16:25.268499       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:16:25.316623       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81] <==
	I1013 23:16:24.871522       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:16:24.871601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:16:24.872626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:16:24.875402       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:16:24.899766       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:16:24.902107       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:16:24.906366       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:16:24.907614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:16:24.908687       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:16:24.910041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 23:16:24.910104       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:16:24.910290       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:16:24.910342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:16:24.910396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:16:24.910696       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:16:24.910946       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:16:24.911018       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:16:24.911136       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-985461"
	I1013 23:16:24.911197       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:16:24.911251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 23:16:24.911183       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 23:16:24.912331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:16:24.912377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:16:24.917963       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:16:24.928208       1 shared_informer.go:356] "Caches are synced" controller="service account"
	
	
	==> kube-proxy [479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce] <==
	I1013 23:16:23.061143       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:16:23.164925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:16:23.265928       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:16:23.265962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:16:23.266046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:16:23.283840       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:16:23.283889       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:16:23.287176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:16:23.287564       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:16:23.287624       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:23.291744       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:16:23.291818       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:16:23.291918       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:16:23.291968       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:16:23.292156       1 config.go:200] "Starting service config controller"
	I1013 23:16:23.292203       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:16:23.292597       1 config.go:309] "Starting node config controller"
	I1013 23:16:23.297909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:16:23.297938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:16:23.392165       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:16:23.392169       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:16:23.393324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd] <==
	I1013 23:16:19.565436       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:16:21.448230       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:16:21.448271       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:16:21.448282       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:16:21.448289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:16:21.605601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:16:21.605696       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:21.608444       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:16:21.614953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:16:21.615024       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:21.615330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:21.717187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: I1013 23:16:25.535228     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpw2\" (UniqueName: \"kubernetes.io/projected/22d3739f-30bb-4e05-8339-7f1c5f1519af-kube-api-access-qnpw2\") pod \"dashboard-metrics-scraper-6ffb444bf9-x9rr4\" (UID: \"22d3739f-30bb-4e05-8339-7f1c5f1519af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4"
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: W1013 23:16:25.739754     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a WatchSource:0}: Error finding container 1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a: Status 404 returned error can't find the container with id 1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: W1013 23:16:25.763457     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634 WatchSource:0}: Error finding container 18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634: Status 404 returned error can't find the container with id 18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634
	Oct 13 23:16:26 no-preload-985461 kubelet[764]: I1013 23:16:26.903596     764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 23:16:32 no-preload-985461 kubelet[764]: I1013 23:16:32.596517     764 scope.go:117] "RemoveContainer" containerID="7befa0c1e386d5ecf0d505a8ffc6ebf0c089bd839553bc07d5333c5fd06abd75"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: I1013 23:16:33.601461     764 scope.go:117] "RemoveContainer" containerID="7befa0c1e386d5ecf0d505a8ffc6ebf0c089bd839553bc07d5333c5fd06abd75"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: I1013 23:16:33.601614     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: E1013 23:16:33.601767     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:34 no-preload-985461 kubelet[764]: I1013 23:16:34.605678     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:34 no-preload-985461 kubelet[764]: E1013 23:16:34.605838     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:35 no-preload-985461 kubelet[764]: I1013 23:16:35.714192     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:35 no-preload-985461 kubelet[764]: E1013 23:16:35.714384     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.452362     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.659382     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.659938     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: E1013 23:16:48.660562     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.744102     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xr9sp" podStartSLOduration=1.522882117 podStartE2EDuration="23.744077899s" podCreationTimestamp="2025-10-13 23:16:25 +0000 UTC" firstStartedPulling="2025-10-13 23:16:25.766169784 +0000 UTC m=+10.585145053" lastFinishedPulling="2025-10-13 23:16:47.987365558 +0000 UTC m=+32.806340835" observedRunningTime="2025-10-13 23:16:48.687025393 +0000 UTC m=+33.506000679" watchObservedRunningTime="2025-10-13 23:16:48.744077899 +0000 UTC m=+33.563053185"
	Oct 13 23:16:53 no-preload-985461 kubelet[764]: I1013 23:16:53.674155     764 scope.go:117] "RemoveContainer" containerID="8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	Oct 13 23:16:55 no-preload-985461 kubelet[764]: I1013 23:16:55.713932     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:16:55 no-preload-985461 kubelet[764]: E1013 23:16:55.714551     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:17:07 no-preload-985461 kubelet[764]: I1013 23:17:07.452201     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:17:07 no-preload-985461 kubelet[764]: E1013 23:17:07.452483     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:17:11 no-preload-985461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:17:11 no-preload-985461 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:17:11 no-preload-985461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7] <==
	2025/10/13 23:16:48 Using namespace: kubernetes-dashboard
	2025/10/13 23:16:48 Using in-cluster config to connect to apiserver
	2025/10/13 23:16:48 Using secret token for csrf signing
	2025/10/13 23:16:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:16:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:16:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:16:48 Generating JWE encryption key
	2025/10/13 23:16:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:16:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:16:49 Initializing JWE encryption key from synchronized object
	2025/10/13 23:16:49 Creating in-cluster Sidecar client
	2025/10/13 23:16:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:16:49 Serving insecurely on HTTP port: 9090
	2025/10/13 23:16:48 Starting overwatch
	
	
	==> storage-provisioner [0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893] <==
	I1013 23:16:53.789851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:16:53.816589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:16:53.816662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:16:53.820862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:57.276634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:01.536961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:05.136283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:08.190550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:11.213401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:11.221374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.221606       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:17:11.224136       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a!
	W1013 23:17:11.227261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.228169       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dee9d5f-8952-4fb3-ad36-2f1171378517", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a became leader
	W1013 23:17:11.232820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.325206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a!
	W1013 23:17:13.245340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:13.253399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:15.258270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:15.266517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d] <==
	I1013 23:16:22.972478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:16:52.974452       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985461 -n no-preload-985461: exit status 2 (421.747116ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-985461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-985461
helpers_test.go:243: (dbg) docker inspect no-preload-985461:

-- stdout --
	[
	    {
	        "Id": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	        "Created": "2025-10-13T23:14:18.084587368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:16:08.216214039Z",
	            "FinishedAt": "2025-10-13T23:16:07.36824999Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hostname",
	        "HostsPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/hosts",
	        "LogPath": "/var/lib/docker/containers/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad-json.log",
	        "Name": "/no-preload-985461",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-985461:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-985461",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad",
	                "LowerDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e79aca0e3fcc4ff6112be523895504ca94d32af1e2e04ec6e2cb7138f7b0974e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-985461",
	                "Source": "/var/lib/docker/volumes/no-preload-985461/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-985461",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-985461",
	                "name.minikube.sigs.k8s.io": "no-preload-985461",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "450f141f5032548cab588386fd6dc762e18fb90a52718ff1d514a19f128e9860",
	            "SandboxKey": "/var/run/docker/netns/450f141f5032",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-985461": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:01:4f:ec:61:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2b0b0019112f54c353afbc5f1c7d7acc1a1a4608af0cb49812ab4cf98cbb0b9",
	                    "EndpointID": "b7afd0ca1770e74a4e8d26e227f3e40722382d84eb0973091125417b010ed978",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-985461",
	                        "24722b872d75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
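The NetworkSettings.Ports block above is how the harness, and minikube itself later in this log, resolves the host-side port for each forwarded container port: 22/tcp, for example, is published on 127.0.0.1:33464. A self-contained sketch of the same Go-template lookup that docker container inspect -f performs, applied here to a stand-in map rather than the full inspect document:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// binding mimics one entry of the Ports map in the inspect output above.
	type binding struct {
		HostIp, HostPort string
	}
	
	func main() {
		ports := map[string][]binding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33464"}},
		}
		// The same index expression minikube applies to .NetworkSettings.Ports.
		tmpl := template.Must(template.New("port").Parse(`{{(index (index . "22/tcp") 0).HostPort}}` + "\n"))
		tmpl.Execute(os.Stdout, ports) // prints 33464
	}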
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461: exit status 2 (416.69176ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-985461 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-985461 logs -n 25: (1.60774749s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-051941 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-051941    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ delete  │ -p cert-options-051941                                                                                                                                                                                                                        │ cert-options-051941    │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:11 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:11 UTC │ 13 Oct 25 23:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │                     │
	│ stop    │ -p old-k8s-version-670275 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:12 UTC │ 13 Oct 25 23:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873 │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482     │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461      │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:16:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
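Every entry below follows the klog format documented in the header: severity letter, month and day, wall-clock time with microseconds, the logging PID (klog's "threadid"), then source file and line. A small parser for that shape (the regular expression is derived from the format string above, not taken from minikube):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		m := klogLine.FindStringSubmatch("I1013 23:16:42.964614  624746 out.go:360] Setting OutFile to fd 1 ...")
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s at=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}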
	I1013 23:16:42.964614  624746 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:16:42.964804  624746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:42.964818  624746 out.go:374] Setting ErrFile to fd 2...
	I1013 23:16:42.964824  624746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:16:42.965135  624746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:16:42.965618  624746 out.go:368] Setting JSON to false
	I1013 23:16:42.966698  624746 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10739,"bootTime":1760386664,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:16:42.966771  624746 start.go:141] virtualization:  
	I1013 23:16:42.970372  624746 out.go:179] * [embed-certs-505482] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:16:42.973420  624746 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:16:42.973582  624746 notify.go:220] Checking for updates...
	I1013 23:16:42.976734  624746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:16:42.979743  624746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:42.982722  624746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:16:42.985720  624746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:16:42.988697  624746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:16:42.992170  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:42.992824  624746 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:16:43.025996  624746 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:16:43.026121  624746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:43.101448  624746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:43.089909136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:43.101571  624746 docker.go:318] overlay module found
	I1013 23:16:43.104942  624746 out.go:179] * Using the docker driver based on existing profile
	I1013 23:16:43.108368  624746 start.go:305] selected driver: docker
	I1013 23:16:43.108389  624746 start.go:925] validating driver "docker" against &{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:43.108578  624746 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:16:43.109602  624746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:16:43.200526  624746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:16:43.189361347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:16:43.200881  624746 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:43.200912  624746 cni.go:84] Creating CNI manager for ""
	I1013 23:16:43.200965  624746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:43.200997  624746 start.go:349] cluster config:
	{Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:43.204431  624746 out.go:179] * Starting "embed-certs-505482" primary control-plane node in "embed-certs-505482" cluster
	I1013 23:16:43.207434  624746 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:16:43.210696  624746 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:16:43.213584  624746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:43.213643  624746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:16:43.213656  624746 cache.go:58] Caching tarball of preloaded images
	I1013 23:16:43.213679  624746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:16:43.213760  624746 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:16:43.213771  624746 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:16:43.213879  624746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:16:43.234329  624746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:16:43.234347  624746 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:16:43.234366  624746 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:16:43.234388  624746 start.go:360] acquireMachinesLock for embed-certs-505482: {Name:mk60574f1c53ab31d166b72e157fd38e1fef9702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:16:43.234437  624746 start.go:364] duration metric: took 32.951µs to acquireMachinesLock for "embed-certs-505482"
	I1013 23:16:43.234456  624746 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:16:43.234467  624746 fix.go:54] fixHost starting: 
	I1013 23:16:43.234884  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:43.255803  624746 fix.go:112] recreateIfNeeded on embed-certs-505482: state=Stopped err=<nil>
	W1013 23:16:43.255836  624746 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:16:44.856516  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:46.856924  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:43.259350  624746 out.go:252] * Restarting existing docker container for "embed-certs-505482" ...
	I1013 23:16:43.259449  624746 cli_runner.go:164] Run: docker start embed-certs-505482
	I1013 23:16:43.642295  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:43.667322  624746 kic.go:430] container "embed-certs-505482" state is running.
	I1013 23:16:43.667776  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:43.697538  624746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/config.json ...
	I1013 23:16:43.698092  624746 machine.go:93] provisionDockerMachine start ...
	I1013 23:16:43.698163  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:43.730384  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:43.731003  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:43.731020  624746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:16:43.732096  624746 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 23:16:46.887033  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
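The handshake EOF at 23:16:43.73 followed by a clean command at 23:16:46.88 is the usual pattern right after docker start: the forwarded port accepts TCP before sshd inside the container is ready, so the provisioner keeps retrying. A sketch of that retry shape (the pattern, port, and counts are assumptions, not minikube's actual code):

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// dialWithRetry keeps trying the forwarded SSH port until the service accepts.
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c net.Conn
			if c, err = net.DialTimeout("tcp", addr, 3*time.Second); err == nil {
				return c, nil
			}
			time.Sleep(time.Second)
		}
		return nil, err
	}
	
	func main() {
		if c, err := dialWithRetry("127.0.0.1:33469", 10); err == nil {
			c.Close()
			fmt.Println("sshd is up")
		} else {
			fmt.Println("gave up:", err)
		}
	}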
	
	I1013 23:16:46.887055  624746 ubuntu.go:182] provisioning hostname "embed-certs-505482"
	I1013 23:16:46.887149  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:46.908795  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:46.909101  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:46.909122  624746 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-505482 && echo "embed-certs-505482" | sudo tee /etc/hostname
	I1013 23:16:47.086159  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-505482
	
	I1013 23:16:47.086303  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:47.115034  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:47.115498  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:47.115519  624746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-505482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-505482/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-505482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:16:47.271490  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:16:47.271519  624746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:16:47.271539  624746 ubuntu.go:190] setting up certificates
	I1013 23:16:47.271550  624746 provision.go:84] configureAuth start
	I1013 23:16:47.271613  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:47.294143  624746 provision.go:143] copyHostCerts
	I1013 23:16:47.294208  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:16:47.294225  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:16:47.294297  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:16:47.294389  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:16:47.294403  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:16:47.294431  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:16:47.294490  624746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:16:47.294500  624746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:16:47.294531  624746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:16:47.294590  624746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.embed-certs-505482 san=[127.0.0.1 192.168.76.2 embed-certs-505482 localhost minikube]
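The san=[...] list above becomes the server certificate's subject alternative names, split between DNS names and IP addresses. Expressed as a Go x509 template (field mapping only; key generation and signing are omitted, and the Organization value is the org= string from the log):

	package main
	
	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"net"
	)
	
	func main() {
		tmpl := x509.Certificate{
			Subject:     pkix.Name{Organization: []string{"jenkins.embed-certs-505482"}},
			DNSNames:    []string{"embed-certs-505482", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		fmt.Println("DNS SANs:", tmpl.DNSNames, "IP SANs:", tmpl.IPAddresses)
	}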
	I1013 23:16:48.394872  624746 provision.go:177] copyRemoteCerts
	I1013 23:16:48.394986  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:16:48.395059  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:48.413854  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:48.530000  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:16:48.561559  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:16:48.590752  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:16:48.620041  624746 provision.go:87] duration metric: took 1.34845917s to configureAuth
	I1013 23:16:48.620127  624746 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:16:48.620388  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:48.620607  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:48.643621  624746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:16:48.644078  624746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1013 23:16:48.644107  624746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:16:49.047381  624746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:16:49.047447  624746 machine.go:96] duration metric: took 5.349338746s to provisionDockerMachine
	I1013 23:16:49.047475  624746 start.go:293] postStartSetup for "embed-certs-505482" (driver="docker")
	I1013 23:16:49.047500  624746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:16:49.047596  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:16:49.047664  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.084740  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.200770  624746 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:16:49.204833  624746 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:16:49.204905  624746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:16:49.204931  624746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:16:49.205016  624746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:16:49.205143  624746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:16:49.205301  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:16:49.214064  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:49.234811  624746 start.go:296] duration metric: took 187.291282ms for postStartSetup
	I1013 23:16:49.234937  624746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:16:49.235016  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.260160  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.369499  624746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:16:49.375030  624746 fix.go:56] duration metric: took 6.140553684s for fixHost
	I1013 23:16:49.375056  624746 start.go:83] releasing machines lock for "embed-certs-505482", held for 6.140609839s
	I1013 23:16:49.375167  624746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-505482
	I1013 23:16:49.400842  624746 ssh_runner.go:195] Run: cat /version.json
	I1013 23:16:49.400898  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.400906  624746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:16:49.400986  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:49.433722  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.447335  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:49.681552  624746 ssh_runner.go:195] Run: systemctl --version
	I1013 23:16:49.688810  624746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:16:49.730899  624746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:16:49.735592  624746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:16:49.735692  624746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:16:49.743887  624746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:16:49.743910  624746 start.go:495] detecting cgroup driver to use...
	I1013 23:16:49.743941  624746 detect.go:187] detected "cgroupfs" cgroup driver on host os
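The "cgroupfs" verdict matches the CgroupDriver:cgroupfs field in the docker info dumps earlier in this log. One way to read the same signal from a shell-out (whether detect.go does exactly this is an assumption):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// docker info exposes the same CgroupDriver field shown above.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println("host cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs"
	}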
	I1013 23:16:49.743993  624746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:16:49.760068  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:16:49.774185  624746 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:16:49.774250  624746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:16:49.790296  624746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:16:49.804016  624746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:16:49.925591  624746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:16:50.037869  624746 docker.go:234] disabling docker service ...
	I1013 23:16:50.037998  624746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:16:50.054594  624746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:16:50.068537  624746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:16:50.190446  624746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:16:50.307013  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:16:50.320550  624746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:16:50.335368  624746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:16:50.335463  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.344921  624746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:16:50.345004  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.356772  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.367797  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.378056  624746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:16:50.387638  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.398398  624746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.408206  624746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:16:50.417060  624746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:16:50.424736  624746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:16:50.431803  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:50.543667  624746 ssh_runner.go:195] Run: sudo systemctl restart crio
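Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following settings before the crio restart (reconstructed fragment; other keys in that file are untouched and omitted here):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]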
	I1013 23:16:51.044843  624746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:16:51.044913  624746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:16:51.048822  624746 start.go:563] Will wait 60s for crictl version
	I1013 23:16:51.048895  624746 ssh_runner.go:195] Run: which crictl
	I1013 23:16:51.052693  624746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:16:51.078417  624746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:16:51.078500  624746 ssh_runner.go:195] Run: crio --version
	I1013 23:16:51.109148  624746 ssh_runner.go:195] Run: crio --version
	I1013 23:16:51.146988  624746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:16:51.150408  624746 cli_runner.go:164] Run: docker network inspect embed-certs-505482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:16:51.167902  624746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:16:51.172125  624746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:16:51.182266  624746 kubeadm.go:883] updating cluster {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:16:51.182391  624746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:16:51.182451  624746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:16:51.215495  624746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:16:51.215519  624746 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:16:51.215579  624746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:16:51.245229  624746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:16:51.245257  624746 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:16:51.245266  624746 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:16:51.245376  624746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-505482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:16:51.245474  624746 ssh_runner.go:195] Run: crio config
	I1013 23:16:51.319637  624746 cni.go:84] Creating CNI manager for ""
	I1013 23:16:51.319661  624746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:16:51.319726  624746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:16:51.319764  624746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-505482 NodeName:embed-certs-505482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:16:51.319913  624746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-505482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:16:51.320016  624746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:16:51.328652  624746 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:16:51.328741  624746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:16:51.337002  624746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1013 23:16:51.349939  624746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:16:51.365473  624746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
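Note: the 2215-byte kubeadm.yaml.new written here is the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch for sanity-checking such a file by hand (`kubeadm config validate` exists in recent kubeadm releases; the diff mirrors the check this run performs a few seconds later):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new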
	I1013 23:16:51.378367  624746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:16:51.382132  624746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:16:51.391932  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:51.505640  624746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:51.522792  624746 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482 for IP: 192.168.76.2
	I1013 23:16:51.522863  624746 certs.go:195] generating shared ca certs ...
	I1013 23:16:51.522897  624746 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:51.523153  624746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:16:51.523238  624746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:16:51.523285  624746 certs.go:257] generating profile certs ...
	I1013 23:16:51.523424  624746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/client.key
	I1013 23:16:51.523523  624746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key.049067f4
	I1013 23:16:51.523591  624746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key
	I1013 23:16:51.523732  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:16:51.523809  624746 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:16:51.523846  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:16:51.523901  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:16:51.523965  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:16:51.524023  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:16:51.524097  624746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:16:51.524849  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:16:51.554072  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:16:51.590057  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:16:51.614436  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:16:51.633981  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 23:16:51.656587  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:16:51.679490  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:16:51.709263  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/embed-certs-505482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:16:51.733073  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:16:51.759383  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:16:51.779151  624746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:16:51.799675  624746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:16:51.813671  624746 ssh_runner.go:195] Run: openssl version
	I1013 23:16:51.820371  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:16:51.830177  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.833900  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.833973  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:16:51.880657  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:16:51.889000  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:16:51.900115  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.904023  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.904139  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:16:51.945166  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:16:51.953721  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:16:51.962393  624746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:16:51.966127  624746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:16:51.966205  624746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:16:52.008251  624746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
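Note: the symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes; `openssl x509 -hash` prints the value that OpenSSL's certificate-directory lookup expects as `<hash>.0`. The pattern in general form (illustrative):

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    H=$(openssl x509 -hash -noout -in "$PEM")     # e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/${H}.0"    # where libssl looks it up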
	I1013 23:16:52.017077  624746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:16:52.021602  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:16:52.063647  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:16:52.114709  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:16:52.156188  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:16:52.198432  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:16:52.242999  624746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
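Note: each `-checkend 86400` invocation above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, so minikube can skip regenerating it. The same check in isolation (a sketch, using one of the paths from the log):

    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -enddate -in "$CRT"
    if sudo openssl x509 -noout -in "$CRT" -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h (or already expired)"
    fi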
	I1013 23:16:52.301359  624746 kubeadm.go:400] StartCluster: {Name:embed-certs-505482 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-505482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:16:52.301453  624746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:16:52.301522  624746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:16:52.363843  624746 cri.go:89] found id: "571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7"
	I1013 23:16:52.363871  624746 cri.go:89] found id: "dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7"
	I1013 23:16:52.363877  624746 cri.go:89] found id: ""
	I1013 23:16:52.363924  624746 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:16:52.384347  624746 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:16:52Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:16:52.384518  624746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:16:52.417755  624746 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:16:52.417816  624746 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:16:52.417899  624746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:16:52.432936  624746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:16:52.433567  624746 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-505482" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:52.433866  624746 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-505482" cluster setting kubeconfig missing "embed-certs-505482" context setting]
	I1013 23:16:52.434397  624746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.436184  624746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:16:52.444493  624746 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 23:16:52.444566  624746 kubeadm.go:601] duration metric: took 26.729802ms to restartPrimaryControlPlane
	I1013 23:16:52.444591  624746 kubeadm.go:402] duration metric: took 143.242175ms to StartCluster
	I1013 23:16:52.444635  624746 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.444711  624746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:16:52.445961  624746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:16:52.446247  624746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:16:52.446718  624746 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:16:52.446786  624746 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-505482"
	I1013 23:16:52.446801  624746 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-505482"
	W1013 23:16:52.446807  624746 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:16:52.446830  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.447240  624746 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:16:52.447640  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.447811  624746 addons.go:69] Setting dashboard=true in profile "embed-certs-505482"
	I1013 23:16:52.447847  624746 addons.go:238] Setting addon dashboard=true in "embed-certs-505482"
	W1013 23:16:52.447869  624746 addons.go:247] addon dashboard should already be in state true
	I1013 23:16:52.447920  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.448416  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.452924  624746 out.go:179] * Verifying Kubernetes components...
	I1013 23:16:52.448551  624746 addons.go:69] Setting default-storageclass=true in profile "embed-certs-505482"
	I1013 23:16:52.454250  624746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-505482"
	I1013 23:16:52.454569  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.457614  624746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:16:52.521158  624746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:16:52.524772  624746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:16:52.524947  624746 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:52.524970  624746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:16:52.525038  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.530141  624746 addons.go:238] Setting addon default-storageclass=true in "embed-certs-505482"
	W1013 23:16:52.530165  624746 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:16:52.530189  624746 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:16:52.530606  624746 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:16:52.530724  624746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1013 23:16:48.857024  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:51.356633  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:52.533714  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:16:52.533740  624746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:16:52.533808  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.574271  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.576547  624746 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:16:52.576567  624746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:16:52.576628  624746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:16:52.590329  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.619542  624746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:16:52.836432  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:16:52.864747  624746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:16:52.888519  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:16:52.888586  624746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:16:52.900599  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1013 23:16:53.360006  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	W1013 23:16:55.855604  621879 pod_ready.go:104] pod "coredns-66bc5c9577-qz7kw" is not "Ready", error: <nil>
	I1013 23:16:57.356594  621879 pod_ready.go:94] pod "coredns-66bc5c9577-qz7kw" is "Ready"
	I1013 23:16:57.356676  621879 pod_ready.go:86] duration metric: took 34.006287602s for pod "coredns-66bc5c9577-qz7kw" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.359414  621879 pod_ready.go:83] waiting for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.363399  621879 pod_ready.go:94] pod "etcd-no-preload-985461" is "Ready"
	I1013 23:16:57.363474  621879 pod_ready.go:86] duration metric: took 4.036834ms for pod "etcd-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.365563  621879 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.369891  621879 pod_ready.go:94] pod "kube-apiserver-no-preload-985461" is "Ready"
	I1013 23:16:57.369963  621879 pod_ready.go:86] duration metric: took 4.33286ms for pod "kube-apiserver-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.372176  621879 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.554273  621879 pod_ready.go:94] pod "kube-controller-manager-no-preload-985461" is "Ready"
	I1013 23:16:57.554354  621879 pod_ready.go:86] duration metric: took 182.107947ms for pod "kube-controller-manager-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:57.754688  621879 pod_ready.go:83] waiting for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:52.979116  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:16:52.979196  624746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:16:53.088322  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:16:53.088396  624746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:16:53.162089  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:16:53.162163  624746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:16:53.182601  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:16:53.182678  624746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:16:53.207593  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:16:53.207669  624746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:16:53.240878  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:16:53.240952  624746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:16:53.256695  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:16:53.256771  624746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:16:53.270064  624746 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:16:53.270136  624746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:16:53.295328  624746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
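Note: this single kubectl apply installs all ten dashboard manifests staged above. A plausible follow-up check once it returns (illustrative; the deployment name is assumed from the kubernetes-dashboard pods that appear later in this report):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s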
	I1013 23:16:58.154739  621879 pod_ready.go:94] pod "kube-proxy-24vhq" is "Ready"
	I1013 23:16:58.154762  621879 pod_ready.go:86] duration metric: took 400.000111ms for pod "kube-proxy-24vhq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.355291  621879 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.754918  621879 pod_ready.go:94] pod "kube-scheduler-no-preload-985461" is "Ready"
	I1013 23:16:58.754944  621879 pod_ready.go:86] duration metric: took 399.629526ms for pod "kube-scheduler-no-preload-985461" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:16:58.754957  621879 pod_ready.go:40] duration metric: took 35.408313176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:58.860401  621879 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:16:58.863397  621879 out.go:179] * Done! kubectl is now configured to use "no-preload-985461" cluster and "default" namespace by default
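Note: the pod_ready lines above are minikube's own readiness polling over the label selectors listed at 23:16:58. An approximate kubectl equivalent for one of those selectors (a sketch; assumes the kubeconfig context created by this run):

    kubectl --context no-preload-985461 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m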
	I1013 23:16:59.060047  624746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.195225949s)
	I1013 23:16:59.060092  624746 node_ready.go:35] waiting up to 6m0s for node "embed-certs-505482" to be "Ready" ...
	I1013 23:16:59.060431  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.159764541s)
	I1013 23:16:59.061531  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.225013445s)
	I1013 23:16:59.119238  624746 node_ready.go:49] node "embed-certs-505482" is "Ready"
	I1013 23:16:59.119274  624746 node_ready.go:38] duration metric: took 59.159493ms for node "embed-certs-505482" to be "Ready" ...
	I1013 23:16:59.119289  624746 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:16:59.119348  624746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:16:59.216051  624746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.92063321s)
	I1013 23:16:59.216084  624746 api_server.go:72] duration metric: took 6.769780007s to wait for apiserver process to appear ...
	I1013 23:16:59.216098  624746 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:16:59.216132  624746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:16:59.219186  624746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-505482 addons enable metrics-server
	
	I1013 23:16:59.222097  624746 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 23:16:59.225053  624746 addons.go:514] duration metric: took 6.77831956s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 23:16:59.234101  624746 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 23:16:59.234128  624746 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 23:16:59.716750  624746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:16:59.725824  624746 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:16:59.726977  624746 api_server.go:141] control plane version: v1.34.1
	I1013 23:16:59.727008  624746 api_server.go:131] duration metric: took 510.902459ms to wait for apiserver health ...
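Note: both 500 responses above fail on exactly one post-start hook, poststarthook/rbac/bootstrap-roles, which clears on its own once the bootstrap RBAC objects are written; the retry half a second later gets a plain 200. The same poll can be reproduced with curl (a sketch; /healthz is readable anonymously via the system:public-info-viewer binding):

    until curl -fsk https://192.168.76.2:8443/healthz >/dev/null; do sleep 0.5; done
    curl -sk 'https://192.168.76.2:8443/healthz?verbose=true'   # lists each hook, like the dumps above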
	I1013 23:16:59.727018  624746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:16:59.730458  624746 system_pods.go:59] 8 kube-system pods found
	I1013 23:16:59.730500  624746 system_pods.go:61] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:59.730511  624746 system_pods.go:61] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:59.730520  624746 system_pods.go:61] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:59.730527  624746 system_pods.go:61] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:59.730538  624746 system_pods.go:61] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:59.730544  624746 system_pods.go:61] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:59.730550  624746 system_pods.go:61] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:59.730555  624746 system_pods.go:61] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Running
	I1013 23:16:59.730561  624746 system_pods.go:74] duration metric: took 3.537242ms to wait for pod list to return data ...
	I1013 23:16:59.730571  624746 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:16:59.733179  624746 default_sa.go:45] found service account: "default"
	I1013 23:16:59.733207  624746 default_sa.go:55] duration metric: took 2.626729ms for default service account to be created ...
	I1013 23:16:59.733218  624746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:16:59.736534  624746 system_pods.go:86] 8 kube-system pods found
	I1013 23:16:59.736581  624746 system_pods.go:89] "coredns-66bc5c9577-6rtz5" [1a2091eb-00b5-46b1-8f85-225c56508322] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:16:59.736598  624746 system_pods.go:89] "etcd-embed-certs-505482" [4620d6b3-7695-45a4-88f6-9db5af3fa1a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:16:59.736604  624746 system_pods.go:89] "kindnet-zf5h8" [7567865c-bc2d-41f2-9515-bf3a0c1d5f61] Running
	I1013 23:16:59.736617  624746 system_pods.go:89] "kube-apiserver-embed-certs-505482" [8fb6166d-fbdc-4c34-a991-aa6cd95a5c29] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:16:59.736627  624746 system_pods.go:89] "kube-controller-manager-embed-certs-505482" [efb3e995-198d-4431-b436-c6b12435318d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:16:59.736637  624746 system_pods.go:89] "kube-proxy-n2g5d" [efe0cfdc-21ae-46d8-9a5b-37af5b01cc3d] Running
	I1013 23:16:59.736645  624746 system_pods.go:89] "kube-scheduler-embed-certs-505482" [41e6ad1a-48c8-43af-88b0-e9bae19f3cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:16:59.736654  624746 system_pods.go:89] "storage-provisioner" [7c85e3a2-d20e-48ef-84ef-980fe6e2d40e] Running
	I1013 23:16:59.736662  624746 system_pods.go:126] duration metric: took 3.438462ms to wait for k8s-apps to be running ...
	I1013 23:16:59.736674  624746 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:16:59.736734  624746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:16:59.751289  624746 system_svc.go:56] duration metric: took 14.604121ms WaitForService to wait for kubelet
	I1013 23:16:59.751324  624746 kubeadm.go:586] duration metric: took 7.305015245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:16:59.751345  624746 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:16:59.757327  624746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:16:59.757371  624746 node_conditions.go:123] node cpu capacity is 2
	I1013 23:16:59.757386  624746 node_conditions.go:105] duration metric: took 6.036063ms to run NodePressure ...
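Note: the NodePressure check above just reads the node's reported capacity (2 CPUs and ~203 GiB of ephemeral storage here). The same data via kubectl (illustrative):

    kubectl --context embed-certs-505482 get node embed-certs-505482 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'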
	I1013 23:16:59.757399  624746 start.go:241] waiting for startup goroutines ...
	I1013 23:16:59.757406  624746 start.go:246] waiting for cluster config update ...
	I1013 23:16:59.757418  624746 start.go:255] writing updated cluster config ...
	I1013 23:16:59.757728  624746 ssh_runner.go:195] Run: rm -f paused
	I1013 23:16:59.762066  624746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:16:59.767003  624746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 23:17:01.772998  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:03.778134  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:06.273229  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:08.273482  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:10.773195  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 13 23:16:48 no-preload-985461 crio[647]: time="2025-10-13T23:16:48.681552781Z" level=info msg="Removed container d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4/dashboard-metrics-scraper" id=bcd63e16-09b5-4f9c-8684-242879ea342f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:16:52 no-preload-985461 conmon[1114]: conmon 8323fd99c8ddeb0e49de <ninfo>: container 1118 exited with status 1
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.675306612Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6b162912-9003-42f5-9734-baee9f01149e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.676729517Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc03920-9bf4-44bf-b5e6-d5a0cdfb27cd name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.678344147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aad009b7-ba31-4f13-84a0-782c59d03b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.678593061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683391939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683597654Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9dab15d7d4b35b88df67d274b26a842bf7110d67284abb27d44a555757e10349/merged/etc/passwd: no such file or directory"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683635914Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9dab15d7d4b35b88df67d274b26a842bf7110d67284abb27d44a555757e10349/merged/etc/group: no such file or directory"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.683937602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.74493238Z" level=info msg="Created container 0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893: kube-system/storage-provisioner/storage-provisioner" id=aad009b7-ba31-4f13-84a0-782c59d03b5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.747414308Z" level=info msg="Starting container: 0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893" id=f8070729-4736-415f-8945-63ca95ff3eb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:16:53 no-preload-985461 crio[647]: time="2025-10-13T23:16:53.756866319Z" level=info msg="Started container" PID=1622 containerID=0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893 description=kube-system/storage-provisioner/storage-provisioner id=f8070729-4736-415f-8945-63ca95ff3eb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31aa621d8b16538888922a52c565c47d0051ea1f66e9b8c1efce8f9374e7b762
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.307576989Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312892681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312948278Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.312977742Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.319974754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.320144285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.320229461Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.327052126Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.32770712Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.327801452Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.333425462Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:03 no-preload-985461 crio[647]: time="2025-10-13T23:17:03.333618861Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0db2d59931d13       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   31aa621d8b165       storage-provisioner                          kube-system
	a981ae5b4f200       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   1c008cfe54b9a       dashboard-metrics-scraper-6ffb444bf9-x9rr4   kubernetes-dashboard
	5f46110128fb8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago       Running             kubernetes-dashboard        0                   18f58d81f5293       kubernetes-dashboard-855c9754f9-xr9sp        kubernetes-dashboard
	724b420b38a8e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   f4bf9f8d9df3c       coredns-66bc5c9577-qz7kw                     kube-system
	875b00f702502       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   8f96978202ba8       busybox                                      default
	479a4c6a54e2e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   c7502ccae1d7e       kube-proxy-24vhq                             kube-system
	dd48b184df6b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   3da5924a0a7d6       kindnet-ljpdl                                kube-system
	8323fd99c8dde       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   31aa621d8b165       storage-provisioner                          kube-system
	ca197e5ccabad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   efd4b36a2975f       etcd-no-preload-985461                       kube-system
	e6dcc041a964a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ebaa76727bb26       kube-scheduler-no-preload-985461             kube-system
	ad4b2abb5a0c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   46bad781ac1db       kube-apiserver-no-preload-985461             kube-system
	94ced949d329c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   12f9c71e2925a       kube-controller-manager-no-preload-985461    kube-system
	
	
	==> coredns [724b420b38a8e6b1d29e11c05f3913668233120711d3308275301eca8aaa8fd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46308 - 4391 "HINFO IN 7731610983691727930.1757820391371373112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025507277s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-985461
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-985461
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=no-preload-985461
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-985461
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:17:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:16:52 +0000   Mon, 13 Oct 2025 23:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-985461
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                c81637a3-d3d8-45df-8334-a3fb5c4d8e37
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-qz7kw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-985461                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-ljpdl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-985461              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-no-preload-985461     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-24vhq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-no-preload-985461              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x9rr4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xr9sp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m18s (x8 over 2m18s)  kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m6s                   kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s                   kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m6s                   kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m1s                   node-controller  Node no-preload-985461 event: Registered Node no-preload-985461 in Controller
	  Normal   NodeReady                96s                    kubelet          Node no-preload-985461 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node no-preload-985461 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node no-preload-985461 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node no-preload-985461 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node no-preload-985461 event: Registered Node no-preload-985461 in Controller
	
	
	==> dmesg <==
	[Oct13 22:53] overlayfs: idmapped layers are currently not supported
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ca197e5ccabade51478fd7728ee7b5ca28a1bdcb05fde64e5acff9535fc178cc] <==
	{"level":"warn","ts":"2025-10-13T23:16:19.495296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.539275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.578040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.612073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.632088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.669106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.742239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.780196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.835511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.877155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.922385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:19.962373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.031437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.053270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.076799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.108062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.134676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.166452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.254506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.275568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.316174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.340989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.363655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.396707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:20.464668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:17:17 up  2:59,  0 user,  load average: 4.14, 3.45, 2.73
	Linux no-preload-985461 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd48b184df6b143bf67e927d5aded7eb332ee9943358347aa34f17b9d3a0e99a] <==
	I1013 23:16:23.045253       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:16:23.103595       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:16:23.103828       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:16:23.103894       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:16:23.103945       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:16:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:16:23.304508       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:16:23.304588       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:16:23.304622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:16:23.305330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:16:53.305232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:16:53.305357       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:16:53.305471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:16:53.305547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 23:16:54.705026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:16:54.705100       1 metrics.go:72] Registering metrics
	I1013 23:16:54.705168       1 controller.go:711] "Syncing nftables rules"
	I1013 23:17:03.307164       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:17:03.307293       1 main.go:301] handling current node
	I1013 23:17:13.311343       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:17:13.311394       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad4b2abb5a0c03dab14186bfcfe871a8269efe62dea94aa86fb792c8533ea086] <==
	I1013 23:16:21.676845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:16:21.677013       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:16:21.677193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:16:21.693758       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:16:21.694572       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:16:21.694699       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:16:21.697695       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:16:21.697722       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:16:21.699325       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:16:21.699334       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:16:21.699338       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:16:21.699344       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:16:21.698854       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 23:16:21.729556       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:16:22.207584       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:16:22.255512       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:16:22.315703       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:16:22.387779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:16:22.405105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:16:22.416104       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:16:22.610545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.123.223"}
	I1013 23:16:22.657182       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.131.66"}
	I1013 23:16:25.068227       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:16:25.268499       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:16:25.316623       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [94ced949d329ca42c57c0dcc0ab094d100a77886a09898107cad3e81fce3ff81] <==
	I1013 23:16:24.871522       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:16:24.871601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:16:24.872626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:16:24.875402       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:16:24.899766       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:16:24.902107       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:16:24.906366       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:16:24.907614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:16:24.908687       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:16:24.910041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 23:16:24.910104       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:16:24.910290       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:16:24.910342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:16:24.910396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:16:24.910696       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:16:24.910946       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:16:24.911018       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:16:24.911136       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-985461"
	I1013 23:16:24.911197       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:16:24.911251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 23:16:24.911183       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 23:16:24.912331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 23:16:24.912377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:16:24.917963       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:16:24.928208       1 shared_informer.go:356] "Caches are synced" controller="service account"
	
	
	==> kube-proxy [479a4c6a54e2e68afddb1aa673dc26a32c4ad999c480d6344380a6d38afa6fce] <==
	I1013 23:16:23.061143       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:16:23.164925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:16:23.265928       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:16:23.265962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:16:23.266046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:16:23.283840       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:16:23.283889       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:16:23.287176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:16:23.287564       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:16:23.287624       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:23.291744       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:16:23.291818       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:16:23.291918       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:16:23.291968       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:16:23.292156       1 config.go:200] "Starting service config controller"
	I1013 23:16:23.292203       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:16:23.292597       1 config.go:309] "Starting node config controller"
	I1013 23:16:23.297909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:16:23.297938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:16:23.392165       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:16:23.392169       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:16:23.393324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6dcc041a964a5908141d63a7f42e4506831ad6091f46c16ae0d0d31a11158dd] <==
	I1013 23:16:19.565436       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:16:21.448230       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:16:21.448271       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:16:21.448282       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:16:21.448289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:16:21.605601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:16:21.605696       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:21.608444       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:16:21.614953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:16:21.615024       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:21.615330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:21.717187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: I1013 23:16:25.535228     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpw2\" (UniqueName: \"kubernetes.io/projected/22d3739f-30bb-4e05-8339-7f1c5f1519af-kube-api-access-qnpw2\") pod \"dashboard-metrics-scraper-6ffb444bf9-x9rr4\" (UID: \"22d3739f-30bb-4e05-8339-7f1c5f1519af\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4"
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: W1013 23:16:25.739754     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a WatchSource:0}: Error finding container 1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a: Status 404 returned error can't find the container with id 1c008cfe54b9ab83eec1cf7dad76e0656e486b609793c09da10f20999b89336a
	Oct 13 23:16:25 no-preload-985461 kubelet[764]: W1013 23:16:25.763457     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24722b872d757973ac7e4adcf8ff60858f4cc7e0eb3634b1c79004abae15fdad/crio-18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634 WatchSource:0}: Error finding container 18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634: Status 404 returned error can't find the container with id 18f58d81f5293888ccece311303cecf3fa54f709b51e52b1312ed0b322b5b634
	Oct 13 23:16:26 no-preload-985461 kubelet[764]: I1013 23:16:26.903596     764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 23:16:32 no-preload-985461 kubelet[764]: I1013 23:16:32.596517     764 scope.go:117] "RemoveContainer" containerID="7befa0c1e386d5ecf0d505a8ffc6ebf0c089bd839553bc07d5333c5fd06abd75"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: I1013 23:16:33.601461     764 scope.go:117] "RemoveContainer" containerID="7befa0c1e386d5ecf0d505a8ffc6ebf0c089bd839553bc07d5333c5fd06abd75"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: I1013 23:16:33.601614     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:33 no-preload-985461 kubelet[764]: E1013 23:16:33.601767     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:34 no-preload-985461 kubelet[764]: I1013 23:16:34.605678     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:34 no-preload-985461 kubelet[764]: E1013 23:16:34.605838     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:35 no-preload-985461 kubelet[764]: I1013 23:16:35.714192     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:35 no-preload-985461 kubelet[764]: E1013 23:16:35.714384     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.452362     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.659382     764 scope.go:117] "RemoveContainer" containerID="d7fa755e2214bb554fa7a6ef8e5c2bcbda71243769bc128fe8cb6429d468908d"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.659938     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: E1013 23:16:48.660562     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:16:48 no-preload-985461 kubelet[764]: I1013 23:16:48.744102     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xr9sp" podStartSLOduration=1.522882117 podStartE2EDuration="23.744077899s" podCreationTimestamp="2025-10-13 23:16:25 +0000 UTC" firstStartedPulling="2025-10-13 23:16:25.766169784 +0000 UTC m=+10.585145053" lastFinishedPulling="2025-10-13 23:16:47.987365558 +0000 UTC m=+32.806340835" observedRunningTime="2025-10-13 23:16:48.687025393 +0000 UTC m=+33.506000679" watchObservedRunningTime="2025-10-13 23:16:48.744077899 +0000 UTC m=+33.563053185"
	Oct 13 23:16:53 no-preload-985461 kubelet[764]: I1013 23:16:53.674155     764 scope.go:117] "RemoveContainer" containerID="8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d"
	Oct 13 23:16:55 no-preload-985461 kubelet[764]: I1013 23:16:55.713932     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:16:55 no-preload-985461 kubelet[764]: E1013 23:16:55.714551     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:17:07 no-preload-985461 kubelet[764]: I1013 23:17:07.452201     764 scope.go:117] "RemoveContainer" containerID="a981ae5b4f20098f6e1818b72d9c111968a396c9fa165c85bbee0a671f77046f"
	Oct 13 23:17:07 no-preload-985461 kubelet[764]: E1013 23:17:07.452483     764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x9rr4_kubernetes-dashboard(22d3739f-30bb-4e05-8339-7f1c5f1519af)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x9rr4" podUID="22d3739f-30bb-4e05-8339-7f1c5f1519af"
	Oct 13 23:17:11 no-preload-985461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:17:11 no-preload-985461 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:17:11 no-preload-985461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5f46110128fb81d270b1cec6e2b2f4f4bf290629ba0e722e52328c484d8606b7] <==
	2025/10/13 23:16:48 Starting overwatch
	2025/10/13 23:16:48 Using namespace: kubernetes-dashboard
	2025/10/13 23:16:48 Using in-cluster config to connect to apiserver
	2025/10/13 23:16:48 Using secret token for csrf signing
	2025/10/13 23:16:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:16:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:16:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:16:48 Generating JWE encryption key
	2025/10/13 23:16:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:16:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:16:49 Initializing JWE encryption key from synchronized object
	2025/10/13 23:16:49 Creating in-cluster Sidecar client
	2025/10/13 23:16:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:16:49 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [0db2d59931d13f60422e1539c3f3230d6661662fb0a0ab38979ecfc2fbf06893] <==
	I1013 23:16:53.789851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:16:53.816589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:16:53.816662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:16:53.820862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:16:57.276634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:01.536961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:05.136283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:08.190550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:11.213401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:11.221374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.221606       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:17:11.224136       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a!
	W1013 23:17:11.227261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.228169       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dee9d5f-8952-4fb3-ad36-2f1171378517", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a became leader
	W1013 23:17:11.232820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:11.325206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-985461_45d41c12-7526-46cc-b79b-cdab59c08b7a!
	W1013 23:17:13.245340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:13.253399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:15.258270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:15.266517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:17.277384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:17.282494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8323fd99c8ddeb0e49de9a6be3e47d906e010bcdf3332b71881843c6b8fea91d] <==
	I1013 23:16:22.972478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:16:52.974452       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985461 -n no-preload-985461
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985461 -n no-preload-985461: exit status 2 (494.26888ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-985461 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.98s)
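Note on the GUEST_PAUSE failures in this group: each failed Pause bottoms out in the same probe, `sudo runc list -f json`, exiting 1 with `open /run/runc: no such file or directory` (the embed-certs stderr below shows the full retry loop). A minimal by-hand sketch of the check, run against the profile above. It treats crun and the /run/crun path as assumptions: this CRI-O image may front crun, whose state directory is /run/crun rather than runc's default /run/runc, but nothing in this report confirms which runtime is configured.

	# Containers are running under CRI-O (crictl can see them), so some runtime state dir must exist.
	out/minikube-linux-arm64 -p no-preload-985461 ssh "sudo crictl ps"

	# Check which OCI runtime state directories are actually present; /run/crun here is the assumption.
	out/minikube-linux-arm64 -p no-preload-985461 ssh "sudo ls -d /run/runc /run/crun"

	# If /run/crun exists, enumerate containers through crun instead of the runc default the pause path uses.
	out/minikube-linux-arm64 -p no-preload-985461 ssh "sudo crun list"

If the node has /run/crun but no /run/runc, the `runc list -f json` call that pause retries can never succeed regardless of container state.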

x
+
TestStartStop/group/embed-certs/serial/Pause (8.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-505482 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-505482 --alsologtostderr -v=1: exit status 80 (2.705558866s)

-- stdout --
	* Pausing node embed-certs-505482 ... 
	
	

-- /stdout --
** stderr ** 
	I1013 23:17:49.774213  630629 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:17:49.774386  630629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:49.774400  630629 out.go:374] Setting ErrFile to fd 2...
	I1013 23:17:49.774406  630629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:49.774647  630629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:17:49.774897  630629 out.go:368] Setting JSON to false
	I1013 23:17:49.774917  630629 mustload.go:65] Loading cluster: embed-certs-505482
	I1013 23:17:49.775296  630629 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:49.775733  630629 cli_runner.go:164] Run: docker container inspect embed-certs-505482 --format={{.State.Status}}
	I1013 23:17:49.805155  630629 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:17:49.805472  630629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:49.924635  630629 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-13 23:17:49.909561536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:17:49.925291  630629 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-505482 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 23:17:49.928753  630629 out.go:179] * Pausing node embed-certs-505482 ... 
	I1013 23:17:49.931687  630629 host.go:66] Checking if "embed-certs-505482" exists ...
	I1013 23:17:49.932044  630629 ssh_runner.go:195] Run: systemctl --version
	I1013 23:17:49.932084  630629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-505482
	I1013 23:17:49.954896  630629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/embed-certs-505482/id_rsa Username:docker}
	I1013 23:17:50.086143  630629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:50.113543  630629 pause.go:52] kubelet running: true
	I1013 23:17:50.113634  630629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:50.514429  630629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:50.514514  630629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:50.668298  630629 cri.go:89] found id: "a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e"
	I1013 23:17:50.668323  630629 cri.go:89] found id: "f4a1214c931c9defa831cf0eaeec82e7070c56e644d8b14c06ce8faf2632027b"
	I1013 23:17:50.668328  630629 cri.go:89] found id: "1de622fa96b2bb4766f5054c4bff72b46522d9894bb62e172bced8c9bfb56f38"
	I1013 23:17:50.668332  630629 cri.go:89] found id: "2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c"
	I1013 23:17:50.668339  630629 cri.go:89] found id: "d2eeb55a841266881586a9e6bb16d8a862f1e4e7acc16d9ad2aa9d2515547900"
	I1013 23:17:50.668343  630629 cri.go:89] found id: "964e0548ee889c7cb00c0e33604118130c516ddd2211c9537910442a46e17ed5"
	I1013 23:17:50.668347  630629 cri.go:89] found id: "116eb96f8d736a4d212167c1ba57bf8044972f29d8801f70ffca6261a57399b3"
	I1013 23:17:50.668350  630629 cri.go:89] found id: "571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7"
	I1013 23:17:50.668363  630629 cri.go:89] found id: "dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7"
	I1013 23:17:50.668371  630629 cri.go:89] found id: "63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	I1013 23:17:50.668376  630629 cri.go:89] found id: "5f462a4795dc27d43d5a62445569013d3c16f0e890b67a12d67306948c7749d7"
	I1013 23:17:50.668380  630629 cri.go:89] found id: ""
	I1013 23:17:50.668434  630629 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:50.690110  630629 retry.go:31] will retry after 354.907277ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:17:51.045664  630629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:51.065368  630629 pause.go:52] kubelet running: false
	I1013 23:17:51.065441  630629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:51.337999  630629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:51.338078  630629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:51.486976  630629 cri.go:89] found id: "a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e"
	I1013 23:17:51.487000  630629 cri.go:89] found id: "f4a1214c931c9defa831cf0eaeec82e7070c56e644d8b14c06ce8faf2632027b"
	I1013 23:17:51.487005  630629 cri.go:89] found id: "1de622fa96b2bb4766f5054c4bff72b46522d9894bb62e172bced8c9bfb56f38"
	I1013 23:17:51.487009  630629 cri.go:89] found id: "2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c"
	I1013 23:17:51.487012  630629 cri.go:89] found id: "d2eeb55a841266881586a9e6bb16d8a862f1e4e7acc16d9ad2aa9d2515547900"
	I1013 23:17:51.487015  630629 cri.go:89] found id: "964e0548ee889c7cb00c0e33604118130c516ddd2211c9537910442a46e17ed5"
	I1013 23:17:51.487019  630629 cri.go:89] found id: "116eb96f8d736a4d212167c1ba57bf8044972f29d8801f70ffca6261a57399b3"
	I1013 23:17:51.487022  630629 cri.go:89] found id: "571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7"
	I1013 23:17:51.487026  630629 cri.go:89] found id: "dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7"
	I1013 23:17:51.487032  630629 cri.go:89] found id: "63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	I1013 23:17:51.487036  630629 cri.go:89] found id: "5f462a4795dc27d43d5a62445569013d3c16f0e890b67a12d67306948c7749d7"
	I1013 23:17:51.487039  630629 cri.go:89] found id: ""
	I1013 23:17:51.487102  630629 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:51.502950  630629 retry.go:31] will retry after 400.777996ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:51Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:17:51.904233  630629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:17:51.920445  630629 pause.go:52] kubelet running: false
	I1013 23:17:51.920518  630629 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:17:52.210927  630629 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:17:52.211099  630629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:17:52.353889  630629 cri.go:89] found id: "a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e"
	I1013 23:17:52.353917  630629 cri.go:89] found id: "f4a1214c931c9defa831cf0eaeec82e7070c56e644d8b14c06ce8faf2632027b"
	I1013 23:17:52.353922  630629 cri.go:89] found id: "1de622fa96b2bb4766f5054c4bff72b46522d9894bb62e172bced8c9bfb56f38"
	I1013 23:17:52.353926  630629 cri.go:89] found id: "2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c"
	I1013 23:17:52.353943  630629 cri.go:89] found id: "d2eeb55a841266881586a9e6bb16d8a862f1e4e7acc16d9ad2aa9d2515547900"
	I1013 23:17:52.353950  630629 cri.go:89] found id: "964e0548ee889c7cb00c0e33604118130c516ddd2211c9537910442a46e17ed5"
	I1013 23:17:52.353953  630629 cri.go:89] found id: "116eb96f8d736a4d212167c1ba57bf8044972f29d8801f70ffca6261a57399b3"
	I1013 23:17:52.353956  630629 cri.go:89] found id: "571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7"
	I1013 23:17:52.353960  630629 cri.go:89] found id: "dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7"
	I1013 23:17:52.353966  630629 cri.go:89] found id: "63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	I1013 23:17:52.353972  630629 cri.go:89] found id: "5f462a4795dc27d43d5a62445569013d3c16f0e890b67a12d67306948c7749d7"
	I1013 23:17:52.353975  630629 cri.go:89] found id: ""
	I1013 23:17:52.354042  630629 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:17:52.377752  630629 out.go:203] 
	W1013 23:17:52.380586  630629 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:17:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:17:52.380607  630629 out.go:285] * 
	W1013 23:17:52.388430  630629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:17:52.390282  630629 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-505482 --alsologtostderr -v=1 failed: exit status 80
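The root cause visible in the stderr above is that every `sudo runc list -f json` probe fails with "open /run/runc: no such file or directory", so the pause path retries briefly and then exits with GUEST_PAUSE. One plausible reading (an assumption, not confirmed by this log) is that under the crio runtime the runc state directory lives somewhere other than /run/runc. A minimal Go sketch of the retry-then-fail pattern the log shows; the helper name, attempt count, and delay are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunc shells out to "runc list -f json" the way the log does,
	// retrying with a short delay before surfacing the last error.
	func listRunc(attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = err // e.g. "open /run/runc: no such file or directory"
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("list running: runc: %w", lastErr)
	}

	func main() {
		if _, err := listRunc(3, 400*time.Millisecond); err != nil {
			fmt.Println("would exit with GUEST_PAUSE:", err)
		}
	}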
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-505482
helpers_test.go:243: (dbg) docker inspect embed-certs-505482:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	        "Created": "2025-10-13T23:14:55.44592554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:16:43.298339011Z",
	            "FinishedAt": "2025-10-13T23:16:42.219260272Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hosts",
	        "LogPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b-json.log",
	        "Name": "/embed-certs-505482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-505482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-505482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	                "LowerDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-505482",
	                "Source": "/var/lib/docker/volumes/embed-certs-505482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-505482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-505482",
	                "name.minikube.sigs.k8s.io": "embed-certs-505482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1985e18a843a7172918dca7b3cf26f0da0522f65f424def1131e02efefa659a4",
	            "SandboxKey": "/var/run/docker/netns/1985e18a843a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-505482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:fc:a3:9f:05:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23158782726c8cb4fc25349485432199b9ed3873182fa18e871d267e9c5dee9e",
	                    "EndpointID": "c3563e5a776a234d0df032614b0b148f42c996e83fe8aa0ba40ecd7cf151b219",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-505482",
	                        "a9accf0872e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
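The inspect dump shows the Docker layer itself is healthy: State.Status is "running", the container restarted at 23:16:43, and ports 22, 2376, 5000, 8443, and 32443 are published on 127.0.0.1, so the pause failure is inside the guest rather than in Docker. For a quicker read than the full dump, the same fields can be pulled with docker inspect's Go-template support; a small sketch, with the container and network name taken from the output above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask docker inspect for only the health-relevant fields.
		format := `{{.State.Status}} pid={{.State.Pid}} ` +
			`ip={{(index .NetworkSettings.Networks "embed-certs-505482").IPAddress}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"embed-certs-505482").CombinedOutput()
		fmt.Printf("%s (err=%v)\n", out, err)
	}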
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482: exit status 2 (521.725112ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
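The harness tolerates exit status 2 here because `minikube status` encodes component state in its exit code: the host container reports Running, while a stopped component (the kubelet was disabled during the failed pause, per "kubelet running: false" above) makes the overall status non-zero. A sketch of capturing that exit code instead of treating it as a hard failure; reading code 2 as "degraded" is an assumption based on this output, not a documented contract:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the harness runs; surface the exit code rather than failing.
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-505482")
		out, err := cmd.Output()
		code := -1
		if cmd.ProcessState != nil {
			code = cmd.ProcessState.ExitCode()
		}
		fmt.Printf("host=%s exit=%d err=%v\n", out, code, err)
	}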
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25: (1.995526935s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:17:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:17:22.597368  628422 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:17:22.597589  628422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:22.597622  628422 out.go:374] Setting ErrFile to fd 2...
	I1013 23:17:22.597643  628422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:22.597927  628422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:17:22.598360  628422 out.go:368] Setting JSON to false
	I1013 23:17:22.599371  628422 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10779,"bootTime":1760386664,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:17:22.599467  628422 start.go:141] virtualization:  
	I1013 23:17:22.606120  628422 out.go:179] * [default-k8s-diff-port-033746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:17:22.609606  628422 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:17:22.609680  628422 notify.go:220] Checking for updates...
	I1013 23:17:22.616422  628422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:17:22.619666  628422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:17:22.622801  628422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:17:22.627294  628422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:17:22.630626  628422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:17:22.634216  628422 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:22.634413  628422 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:17:22.672383  628422 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:17:22.672517  628422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:22.765973  628422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:17:22.756381382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:17:22.766075  628422 docker.go:318] overlay module found
	I1013 23:17:22.770073  628422 out.go:179] * Using the docker driver based on user configuration
	I1013 23:17:22.773315  628422 start.go:305] selected driver: docker
	I1013 23:17:22.773334  628422 start.go:925] validating driver "docker" against <nil>
	I1013 23:17:22.773346  628422 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:17:22.774037  628422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:22.891535  628422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:17:22.874497635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:17:22.891707  628422 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 23:17:22.891957  628422 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:17:22.896850  628422 out.go:179] * Using Docker driver with root privileges
	I1013 23:17:22.899855  628422 cni.go:84] Creating CNI manager for ""
	I1013 23:17:22.899928  628422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:17:22.899946  628422 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 23:17:22.900039  628422 start.go:349] cluster config:
	{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:17:22.903616  628422 out.go:179] * Starting "default-k8s-diff-port-033746" primary control-plane node in "default-k8s-diff-port-033746" cluster
	I1013 23:17:22.906569  628422 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:17:22.909600  628422 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:17:22.912699  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:22.912751  628422 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:17:22.912761  628422 cache.go:58] Caching tarball of preloaded images
	I1013 23:17:22.912777  628422 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:17:22.912838  628422 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:17:22.912847  628422 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:17:22.912958  628422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:17:22.912975  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json: {Name:mk8b5f27a831e52eb3ac20cd660bcee717949d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:22.940205  628422 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:17:22.940242  628422 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:17:22.940256  628422 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:17:22.940279  628422 start.go:360] acquireMachinesLock for default-k8s-diff-port-033746: {Name:mk4950372c3cd6b03a758b4772e5c43a69d20962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:17:22.940582  628422 start.go:364] duration metric: took 191.718µs to acquireMachinesLock for "default-k8s-diff-port-033746"
	I1013 23:17:22.940620  628422 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:17:22.940720  628422 start.go:125] createHost starting for "" (driver="docker")
	W1013 23:17:20.273727  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:22.280587  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:22.946588  628422 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:17:22.946850  628422 start.go:159] libmachine.API.Create for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:17:22.946907  628422 client.go:168] LocalClient.Create starting
	I1013 23:17:22.946990  628422 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:17:22.947028  628422 main.go:141] libmachine: Decoding PEM data...
	I1013 23:17:22.947046  628422 main.go:141] libmachine: Parsing certificate...
	I1013 23:17:22.947248  628422 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:17:22.947295  628422 main.go:141] libmachine: Decoding PEM data...
	I1013 23:17:22.947315  628422 main.go:141] libmachine: Parsing certificate...
	I1013 23:17:22.947680  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:17:22.979733  628422 cli_runner.go:211] docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:17:22.979827  628422 network_create.go:284] running [docker network inspect default-k8s-diff-port-033746] to gather additional debugging logs...
	I1013 23:17:22.979845  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746
	W1013 23:17:23.009760  628422 cli_runner.go:211] docker network inspect default-k8s-diff-port-033746 returned with exit code 1
	I1013 23:17:23.009797  628422 network_create.go:287] error running [docker network inspect default-k8s-diff-port-033746]: docker network inspect default-k8s-diff-port-033746: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-033746 not found
	I1013 23:17:23.009812  628422 network_create.go:289] output of [docker network inspect default-k8s-diff-port-033746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-033746 not found
	
	** /stderr **
	I1013 23:17:23.009938  628422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:17:23.029532  628422 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:17:23.029783  628422 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:17:23.030158  628422 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:17:23.030560  628422 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-23158782726c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:3f:b9:82:32:ff} reservation:<nil>}
	I1013 23:17:23.031165  628422 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019689b0}
	I1013 23:17:23.031194  628422 network_create.go:124] attempt to create docker network default-k8s-diff-port-033746 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 23:17:23.031271  628422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 default-k8s-diff-port-033746
	I1013 23:17:23.124512  628422 network_create.go:108] docker network default-k8s-diff-port-033746 192.168.85.0/24 created
	I1013 23:17:23.124550  628422 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-033746" container
	I1013 23:17:23.124644  628422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:17:23.154654  628422 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-033746 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:17:23.174994  628422 oci.go:103] Successfully created a docker volume default-k8s-diff-port-033746
	I1013 23:17:23.175194  628422 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-033746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --entrypoint /usr/bin/test -v default-k8s-diff-port-033746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:17:24.171446  628422 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-033746
	I1013 23:17:24.171497  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:24.171517  628422 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:17:24.171587  628422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-033746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 23:17:24.782212  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:27.273515  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:28.627779  628422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-033746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.456138737s)
	I1013 23:17:28.627816  628422 kic.go:203] duration metric: took 4.45629601s to extract preloaded images to volume ...
	W1013 23:17:28.627943  628422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:17:28.628062  628422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:17:28.681355  628422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-033746 --name default-k8s-diff-port-033746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --network default-k8s-diff-port-033746 --ip 192.168.85.2 --volume default-k8s-diff-port-033746:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:17:29.053428  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Running}}
	I1013 23:17:29.077200  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.105211  628422 cli_runner.go:164] Run: docker exec default-k8s-diff-port-033746 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:17:29.163007  628422 oci.go:144] the created container "default-k8s-diff-port-033746" has a running status.
	I1013 23:17:29.163043  628422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa...
	I1013 23:17:29.470708  628422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:17:29.497454  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.525078  628422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:17:29.525106  628422 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-033746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:17:29.602760  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.629401  628422 machine.go:93] provisionDockerMachine start ...
	I1013 23:17:29.629509  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:29.656490  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:29.656834  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:29.656852  628422 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:17:29.657536  628422 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1013 23:17:29.273956  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:31.773018  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:32.803208  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:17:32.803233  628422 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-033746"
	I1013 23:17:32.803293  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:32.821033  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:32.821357  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:32.821375  628422 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-033746 && echo "default-k8s-diff-port-033746" | sudo tee /etc/hostname
	I1013 23:17:32.987605  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:17:32.987778  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:33.010463  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:33.010963  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:33.010998  628422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-033746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-033746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-033746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:17:33.159410  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:17:33.159448  628422 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:17:33.159518  628422 ubuntu.go:190] setting up certificates
	I1013 23:17:33.159530  628422 provision.go:84] configureAuth start
	I1013 23:17:33.159616  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:33.176520  628422 provision.go:143] copyHostCerts
	I1013 23:17:33.176607  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:17:33.176623  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:17:33.176700  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:17:33.176791  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:17:33.176802  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:17:33.176829  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:17:33.176885  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:17:33.176895  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:17:33.176919  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:17:33.176971  628422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-033746 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-033746 localhost minikube]
	I1013 23:17:34.217083  628422 provision.go:177] copyRemoteCerts
	I1013 23:17:34.217153  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:17:34.217201  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.234756  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:34.339185  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 23:17:34.357620  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:17:34.378788  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:17:34.398246  628422 provision.go:87] duration metric: took 1.238686044s to configureAuth
	I1013 23:17:34.398276  628422 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:17:34.398471  628422 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:34.398599  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.415905  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:34.416261  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:34.416282  628422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:17:34.762622  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
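
The drop-in written above marks the 10.96.0.0/12 service CIDR as an insecure registry, so pulls from in-cluster registries exposed on a ClusterIP work without TLS; the trailing `systemctl restart crio` is what makes CRI-O pick the option up. Verifying the file afterwards is a one-liner:

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
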
	
	I1013 23:17:34.762647  628422 machine.go:96] duration metric: took 5.133223131s to provisionDockerMachine
	I1013 23:17:34.762674  628422 client.go:171] duration metric: took 11.815738564s to LocalClient.Create
	I1013 23:17:34.762690  628422 start.go:167] duration metric: took 11.815842078s to libmachine.API.Create "default-k8s-diff-port-033746"
	I1013 23:17:34.762701  628422 start.go:293] postStartSetup for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:17:34.762714  628422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:17:34.762790  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:17:34.762848  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.781135  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:34.883383  628422 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:17:34.886525  628422 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:17:34.886551  628422 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:17:34.886561  628422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:17:34.886621  628422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:17:34.886698  628422 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:17:34.886803  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:17:34.899661  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:17:34.921786  628422 start.go:296] duration metric: took 159.06412ms for postStartSetup
	I1013 23:17:34.922186  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:34.939038  628422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:17:34.939366  628422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:17:34.939424  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.957087  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.060077  628422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:17:35.064631  628422 start.go:128] duration metric: took 12.123873733s to createHost
	I1013 23:17:35.064657  628422 start.go:83] releasing machines lock for "default-k8s-diff-port-033746", held for 12.124056826s
	I1013 23:17:35.064741  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:35.083257  628422 ssh_runner.go:195] Run: cat /version.json
	I1013 23:17:35.083297  628422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:17:35.083335  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:35.083386  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:35.106529  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.125668  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.303020  628422 ssh_runner.go:195] Run: systemctl --version
	I1013 23:17:35.309553  628422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:17:35.353541  628422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:17:35.358039  628422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:17:35.358143  628422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:17:35.387672  628422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
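
Only one default CNI configuration can win, so conflicting bridge/podman configs are renamed with a .mk_disabled suffix rather than deleted; kindnet later installs its own 10-kindnet.conflist (visible for the embed-certs node in the CRI-O log at the end of this output). Assuming the find/mv above renamed exactly the two files it reported, the directory would now hold:

    ls /etc/cni/net.d
    # 87-podman-bridge.conflist.mk_disabled
    # 10-crio-bridge.conflist.disabled.mk_disabled
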
	I1013 23:17:35.387694  628422 start.go:495] detecting cgroup driver to use...
	I1013 23:17:35.387727  628422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:17:35.387779  628422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:17:35.405407  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:17:35.418533  628422 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:17:35.418640  628422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:17:35.436293  628422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:17:35.454316  628422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:17:35.582877  628422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:17:35.713194  628422 docker.go:234] disabling docker service ...
	I1013 23:17:35.713286  628422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:17:35.737565  628422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:17:35.753883  628422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:17:35.905536  628422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:17:36.040460  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:17:36.055490  628422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:17:36.070769  628422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:17:36.070887  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.080392  628422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:17:36.080506  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.089831  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.100068  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.109760  628422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:17:36.118416  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.127590  628422 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.141532  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.150696  628422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:17:36.158466  628422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:17:36.166171  628422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:17:36.298507  628422 ssh_runner.go:195] Run: sudo systemctl restart crio
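
The sed edits above touch four settings in the 02-crio.conf drop-in: the pause image, the cgroup manager, conmon's cgroup, and a default sysctl that opens low ports to unprivileged pods. One way to confirm the result after the restart (expected values reconstructed from the commands above, not captured from the host):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
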
	I1013 23:17:36.426153  628422 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:17:36.426273  628422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:17:36.430303  628422 start.go:563] Will wait 60s for crictl version
	I1013 23:17:36.430416  628422 ssh_runner.go:195] Run: which crictl
	I1013 23:17:36.434036  628422 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:17:36.462807  628422 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:17:36.462943  628422 ssh_runner.go:195] Run: crio --version
	I1013 23:17:36.499756  628422 ssh_runner.go:195] Run: crio --version
	I1013 23:17:36.538809  628422 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:17:36.541803  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:17:36.560551  628422 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:17:36.564367  628422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:17:36.574114  628422 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:17:36.574232  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:36.574304  628422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:17:36.609381  628422 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:17:36.609407  628422 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:17:36.609463  628422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:17:36.636054  628422 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:17:36.636076  628422 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:17:36.636084  628422 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1013 23:17:36.636200  628422 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-033746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
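
The empty ExecStart= line in the drop-in above is deliberate systemd syntax: it clears the ExecStart inherited from the base kubelet.service before substituting the full command line, which is why `systemctl daemon-reload` runs before the kubelet is started below. To inspect the merged unit on the node:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in
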
	I1013 23:17:36.636287  628422 ssh_runner.go:195] Run: crio config
	I1013 23:17:36.697799  628422 cni.go:84] Creating CNI manager for ""
	I1013 23:17:36.697824  628422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:17:36.697848  628422 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:17:36.697873  628422 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-033746 NodeName:default-k8s-diff-port-033746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:17:36.698003  628422 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-033746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
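
The generated kubeadm.yaml stacks four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file before init; a sketch using the paths from this run (assumes `kubeadm config validate` is available in the v1.34 binary):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
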
	
	I1013 23:17:36.698080  628422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:17:36.706991  628422 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:17:36.707107  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:17:36.714811  628422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 23:17:36.727812  628422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:17:36.741237  628422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 23:17:36.755302  628422 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:17:36.759021  628422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:17:36.769467  628422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:17:36.904306  628422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:17:36.928371  628422 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746 for IP: 192.168.85.2
	I1013 23:17:36.928395  628422 certs.go:195] generating shared ca certs ...
	I1013 23:17:36.928412  628422 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:36.929075  628422 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:17:36.929168  628422 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:17:36.929203  628422 certs.go:257] generating profile certs ...
	I1013 23:17:36.929287  628422 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key
	I1013 23:17:36.929335  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt with IP's: []
	I1013 23:17:37.513159  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt ...
	I1013 23:17:37.513195  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: {Name:mk3f1999a47229872bbe82dd21e503d52cdc2f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:37.513404  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key ...
	I1013 23:17:37.513423  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key: {Name:mke849aae776be0331a4f2b5a2d673fd823b3bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:37.513525  628422 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68
	I1013 23:17:37.513546  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
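
Note the SAN list: 10.96.0.1 is the first address of the 10.96.0.0/12 ServiceCIDR from the config above, i.e. the ClusterIP the in-cluster `kubernetes` Service will receive, and 192.168.85.2 is the node IP; the remaining entries cover loopback and an alternate default service range. Once the cert is written, the SANs can be inspected with OpenSSL 1.1.1+ (a verification sketch, not part of the run):

    openssl x509 -noout -ext subjectAltName -in \
      /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt
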
	W1013 23:17:33.773776  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:36.273782  624746 pod_ready.go:94] pod "coredns-66bc5c9577-6rtz5" is "Ready"
	I1013 23:17:36.273807  624746 pod_ready.go:86] duration metric: took 36.506776206s for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.277428  624746 pod_ready.go:83] waiting for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.284287  624746 pod_ready.go:94] pod "etcd-embed-certs-505482" is "Ready"
	I1013 23:17:36.284310  624746 pod_ready.go:86] duration metric: took 6.860392ms for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.287565  624746 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.294345  624746 pod_ready.go:94] pod "kube-apiserver-embed-certs-505482" is "Ready"
	I1013 23:17:36.294374  624746 pod_ready.go:86] duration metric: took 6.785045ms for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.299134  624746 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.472070  624746 pod_ready.go:94] pod "kube-controller-manager-embed-certs-505482" is "Ready"
	I1013 23:17:36.472109  624746 pod_ready.go:86] duration metric: took 172.911942ms for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.671667  624746 pod_ready.go:83] waiting for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.071824  624746 pod_ready.go:94] pod "kube-proxy-n2g5d" is "Ready"
	I1013 23:17:37.071850  624746 pod_ready.go:86] duration metric: took 400.15411ms for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.272124  624746 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.672252  624746 pod_ready.go:94] pod "kube-scheduler-embed-certs-505482" is "Ready"
	I1013 23:17:37.672292  624746 pod_ready.go:86] duration metric: took 400.13903ms for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.672308  624746 pod_ready.go:40] duration metric: took 37.910206682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:17:37.785181  624746 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:17:37.788351  624746 out.go:179] * Done! kubectl is now configured to use "embed-certs-505482" cluster and "default" namespace by default
	I1013 23:17:38.059638  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 ...
	I1013 23:17:38.059711  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68: {Name:mkcbf8423ae15290a421453eb7ff40a11672752d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.059903  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68 ...
	I1013 23:17:38.059922  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68: {Name:mk29a46bfc7ce983dca37486b1f6be13f5b3bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.060002  628422 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt
	I1013 23:17:38.060094  628422 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key
	I1013 23:17:38.060157  628422 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key
	I1013 23:17:38.060182  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt with IP's: []
	I1013 23:17:38.543335  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt ...
	I1013 23:17:38.543409  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt: {Name:mkb2fea196c26470f972e26befe5d1ac05b7493a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.543644  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key ...
	I1013 23:17:38.543688  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key: {Name:mkc8a51b77e58fa63683950881065ff746ff3c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.545430  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:17:38.545515  628422 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:17:38.545542  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:17:38.545589  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:17:38.545634  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:17:38.545686  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:17:38.545756  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:17:38.546394  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:17:38.565504  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:17:38.592382  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:17:38.617151  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:17:38.636472  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 23:17:38.655686  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:17:38.673813  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:17:38.694847  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:17:38.716149  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:17:38.734265  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:17:38.753205  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:17:38.774690  628422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:17:38.787344  628422 ssh_runner.go:195] Run: openssl version
	I1013 23:17:38.793647  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:17:38.802210  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.806052  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.806151  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.847919  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:17:38.857393  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:17:38.865879  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.870036  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.870148  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.914381  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:17:38.922855  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:17:38.931402  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.935437  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.935535  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.976421  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
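
The pattern above repeats per certificate: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it, which is the layout OpenSSL's default verify paths expect. The printed hash and the symlink name match, e.g. b5213941 for minikubeCA above; `openssl rehash` does the same in bulk:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    sudo openssl rehash /etc/ssl/certs
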
	I1013 23:17:38.985209  628422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:17:38.988942  628422 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:17:38.989032  628422 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:17:38.989120  628422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:17:38.989188  628422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:17:39.029932  628422 cri.go:89] found id: ""
	I1013 23:17:39.030014  628422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:17:39.038543  628422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:17:39.046924  628422 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:17:39.046991  628422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:17:39.055562  628422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:17:39.055637  628422 kubeadm.go:157] found existing configuration files:
	
	I1013 23:17:39.055713  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 23:17:39.063907  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:17:39.064022  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:17:39.071813  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 23:17:39.079537  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:17:39.079622  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:17:39.087245  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 23:17:39.095378  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:17:39.095441  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:17:39.103804  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 23:17:39.111784  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:17:39.111854  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 23:17:39.120041  628422 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:17:39.159276  628422 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:17:39.159642  628422 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:17:39.194920  628422 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:17:39.194999  628422 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:17:39.195049  628422 kubeadm.go:318] OS: Linux
	I1013 23:17:39.195136  628422 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:17:39.195192  628422 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:17:39.195246  628422 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:17:39.195301  628422 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:17:39.195355  628422 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:17:39.195417  628422 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:17:39.195469  628422 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:17:39.195523  628422 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:17:39.195575  628422 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:17:39.276616  628422 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:17:39.276737  628422 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:17:39.276837  628422 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:17:39.291624  628422 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:17:39.295346  628422 out.go:252]   - Generating certificates and keys ...
	I1013 23:17:39.295475  628422 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:17:39.295564  628422 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:17:39.501127  628422 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:17:40.383927  628422 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:17:41.411937  628422 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:17:42.769042  628422 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:17:43.068915  628422 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:17:43.069373  628422 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-033746 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:17:43.326398  628422 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:17:43.326816  628422 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-033746 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:17:44.836889  628422 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:17:45.338367  628422 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:17:45.492945  628422 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:17:45.493527  628422 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:17:46.173043  628422 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:17:46.532979  628422 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:17:46.795696  628422 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:17:47.031979  628422 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:17:47.556896  628422 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:17:47.557811  628422 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:17:47.562940  628422 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:17:47.566703  628422 out.go:252]   - Booting up control plane ...
	I1013 23:17:47.566809  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:17:47.566890  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:17:47.566960  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:17:47.583603  628422 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:17:47.583733  628422 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:17:47.592020  628422 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:17:47.592548  628422 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:17:47.592626  628422 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:17:47.737185  628422 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:17:47.737324  628422 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 23:17:48.742709  628422 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001976248s
	I1013 23:17:48.742825  628422 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:17:48.742915  628422 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1013 23:17:48.743012  628422 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:17:48.743120  628422 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
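
kubeadm's control-plane checks poll exactly the endpoints shown: the apiserver's /livez on the configured port 8444, plus the controller-manager and scheduler health ports on localhost. The same probes can be issued by hand from the node (-k because the serving certs are not in curl's trust store):

    curl -k 'https://192.168.85.2:8444/livez?verbose'
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez
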
	
	
	==> CRI-O <==
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.97603728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f29eb93a-d2f9-4dff-9baf-6868cdc37de3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.977015058Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=603bd5ab-2edb-4c23-931f-770eb588d2ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.977261642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986560786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986727995Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc06a14c26a098050d2b1e6cc6e824dd48d029a6cccc0241c4ee8adc9b4dc80b/merged/etc/passwd: no such file or directory"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986747901Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc06a14c26a098050d2b1e6cc6e824dd48d029a6cccc0241c4ee8adc9b4dc80b/merged/etc/group: no such file or directory"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986980946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.022877293Z" level=info msg="Created container a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e: kube-system/storage-provisioner/storage-provisioner" id=603bd5ab-2edb-4c23-931f-770eb588d2ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.024037278Z" level=info msg="Starting container: a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e" id=56bf1afa-3ff7-4200-8c30-fe9f20cfcd0d name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.027343592Z" level=info msg="Started container" PID=1641 containerID=a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e description=kube-system/storage-provisioner/storage-provisioner id=56bf1afa-3ff7-4200-8c30-fe9f20cfcd0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7182e0cded57327be22fc912c8f56f7871eac8a6d68984f9a5af6ec980bb892
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.576492832Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582325068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582610519Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582724962Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589181834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589216122Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589237521Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593214598Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593243406Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593263245Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596620651Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596778342Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596886771Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.60184759Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.601879106Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a9da06268e166       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   f7182e0cded57       storage-provisioner                          kube-system
	63b9c13139dbc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   643951c7635de       dashboard-metrics-scraper-6ffb444bf9-5skjj   kubernetes-dashboard
	5f462a4795dc2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago       Running             kubernetes-dashboard        0                   27b2b90aee567       kubernetes-dashboard-855c9754f9-6dnwb        kubernetes-dashboard
	175dece2a6492       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   3d710c182abd9       busybox                                      default
	f4a1214c931c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   bbffc1be43de5       coredns-66bc5c9577-6rtz5                     kube-system
	1de622fa96b2b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   f0d04249b7ce8       kube-proxy-n2g5d                             kube-system
	2db03c15b29f4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   f7182e0cded57       storage-provisioner                          kube-system
	d2eeb55a84126       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   bc69d255b7be6       kindnet-zf5h8                                kube-system
	964e0548ee889       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c04bcca959558       etcd-embed-certs-505482                      kube-system
	116eb96f8d736       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f912e9956e838       kube-apiserver-embed-certs-505482            kube-system
	571a3921ae313       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1d90043f20bde       kube-controller-manager-embed-certs-505482   kube-system
	dd86b0b8cf2e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e5fe14ca864f5       kube-scheduler-embed-certs-505482            kube-system
	
	
	==> coredns [f4a1214c931c9defa831cf0eaeec82e7070c56e644d8b14c06ce8faf2632027b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51233 - 54537 "HINFO IN 5406872875660288081.5278178832335964055. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01274998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
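	
	Every failure above is the same symptom: TCP connections from the pod to the kubernetes Service VIP (10.96.0.1:443) time out until kube-proxy finishes restoring its rules after the restart. A minimal Go sketch of that connectivity probe, runnable from inside a pod (illustrative only, not part of CoreDNS):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// 10.96.0.1:443 is the default kubernetes Service ClusterIP seen in the log.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			// Matches the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable")
	}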
	
	
	==> describe nodes <==
	Name:               embed-certs-505482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-505482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-505482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-505482
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-505482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                19aef056-c1a4-490a-8aaa-19c46d6c5605
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-6rtz5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-embed-certs-505482                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-zf5h8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-505482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-505482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-n2g5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-505482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5skjj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6dnwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-505482 event: Registered Node embed-certs-505482 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-505482 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node embed-certs-505482 event: Registered Node embed-certs-505482 in Controller
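	
	The percentages in the Allocated resources table are integer-truncated ratios of summed requests to node allocatable: 850m of CPU on this 2-CPU (2000m) node is 42%. A short Go sketch of that arithmetic using the apimachinery resource package:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
	)
	
	func main() {
		requests := resource.MustParse("850m") // summed CPU requests from the pod table
		allocatable := resource.MustParse("2") // node allocatable CPU
	
		// Integer division truncates, matching kubectl's 42% for 850m/2000m.
		pct := requests.MilliValue() * 100 / allocatable.MilliValue()
		fmt.Printf("cpu                %s (%d%%)\n", requests.String(), pct)
	}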
	
	
	==> dmesg <==
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [964e0548ee889c7cb00c0e33604118130c516ddd2211c9537910442a46e17ed5] <==
	{"level":"warn","ts":"2025-10-13T23:16:55.906624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.920198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.942087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.953579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.979753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.995543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.010615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.033943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.052011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.086627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.117262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.137148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.151958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.178847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.215740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.216737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.237440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.265809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.307824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.322686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.347013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.384843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.403980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.421176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.485922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:17:54 up  3:00,  0 user,  load average: 4.46, 3.62, 2.82
	Linux embed-certs-505482 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d2eeb55a841266881586a9e6bb16d8a862f1e4e7acc16d9ad2aa9d2515547900] <==
	I1013 23:16:58.403281       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:16:58.403587       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:16:58.403715       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:16:58.403727       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:16:58.403737       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:16:58.576026       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:16:58.576045       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:16:58.576055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:16:58.603941       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:17:28.576777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:17:28.577417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:17:28.604470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:17:28.604470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 23:17:30.276194       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:17:30.276234       1 metrics.go:72] Registering metrics
	I1013 23:17:30.276299       1 controller.go:711] "Syncing nftables rules"
	I1013 23:17:38.576108       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:17:38.576171       1 main.go:301] handling current node
	I1013 23:17:48.578255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:17:48.578390       1 main.go:301] handling current node
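	
	kindnet's startup follows the standard client-go informer pattern: start reflectors, wait for caches to sync (retrying through the i/o timeouts above until the apiserver is reachable), then begin handling nodes. A minimal sketch of that pattern (illustrative; kindnet wraps it with its network-policy controllers):
	
	package main
	
	import (
		"log"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		log.Println("Waiting for caches to sync")
		// While the apiserver is unreachable the reflector retries and logs
		// "Failed to watch ... i/o timeout", exactly as in the output above.
		if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
			log.Fatal("cache sync failed")
		}
		log.Println("Caches are synced")
	}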
	
	
	==> kube-apiserver [116eb96f8d736a4d212167c1ba57bf8044972f29d8801f70ffca6261a57399b3] <==
	I1013 23:16:57.589777       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:16:57.596039       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:16:57.596178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:16:57.605416       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:16:57.607405       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:16:57.607477       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:16:57.608099       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:16:57.611718       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:16:57.612768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:16:57.612863       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:16:57.612871       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:16:57.612876       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:16:57.612883       1 cache.go:39] Caches are synced for autoregister controller
	E1013 23:16:57.702832       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:16:57.777554       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:16:58.103858       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:16:58.606863       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:16:58.712055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:16:58.874641       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:16:58.980678       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:16:59.186590       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.46.243"}
	I1013 23:16:59.208008       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.143.160"}
	I1013 23:17:01.003324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:17:01.254813       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:17:01.371256       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7] <==
	I1013 23:17:00.761625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 23:17:00.761650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 23:17:00.765896       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:17:00.766290       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 23:17:00.772544       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:17:00.773674       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:17:00.773768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:17:00.777213       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:17:00.782595       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:17:00.787824       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:17:00.789052       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:17:00.795935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:17:00.795935       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:17:00.796974       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:17:00.797121       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-505482"
	I1013 23:17:00.797221       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 23:17:00.797002       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:17:00.797021       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:17:00.802543       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:17:00.809506       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:17:00.810662       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:17:00.810727       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:17:00.830586       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:17:00.830618       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:17:00.830629       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1de622fa96b2bb4766f5054c4bff72b46522d9894bb62e172bced8c9bfb56f38] <==
	I1013 23:16:59.084811       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:16:59.261713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:16:59.364540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:16:59.364848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:16:59.364978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:16:59.429052       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:16:59.429175       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:16:59.436473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:16:59.436824       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:16:59.442991       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:59.444170       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:16:59.455003       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:16:59.444835       1 config.go:309] "Starting node config controller"
	I1013 23:16:59.455188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:16:59.455218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:16:59.445310       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:16:59.455282       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:16:59.445303       1 config.go:200] "Starting service config controller"
	I1013 23:16:59.455359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:16:59.555657       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:16:59.555659       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:16:59.555768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7] <==
	I1013 23:16:55.760100       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:16:57.176962       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:16:57.177081       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:16:57.177131       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:16:57.177173       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:16:57.428917       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:16:57.428954       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:57.446209       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:16:57.446420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:57.446436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:57.446456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:16:57.655311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: I1013 23:17:01.495550     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a22d237a-c2a5-46ab-805f-ae6fbea82083-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6dnwb\" (UID: \"a22d237a-c2a5-46ab-805f-ae6fbea82083\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb"
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: I1013 23:17:01.495575     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwzr\" (UniqueName: \"kubernetes.io/projected/a22d237a-c2a5-46ab-805f-ae6fbea82083-kube-api-access-nmwzr\") pod \"kubernetes-dashboard-855c9754f9-6dnwb\" (UID: \"a22d237a-c2a5-46ab-805f-ae6fbea82083\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb"
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: W1013 23:17:01.723534     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/crio-27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84 WatchSource:0}: Error finding container 27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84: Status 404 returned error can't find the container with id 27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84
	Oct 13 23:17:05 embed-certs-505482 kubelet[777]: I1013 23:17:05.861419     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 23:17:06 embed-certs-505482 kubelet[777]: I1013 23:17:06.900750     777 scope.go:117] "RemoveContainer" containerID="bceab25ea0060b1dee233e44f2e645942ce0df6aceef1d370ad02e64f2d1ad38"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: I1013 23:17:07.908420     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: E1013 23:17:07.908586     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: I1013 23:17:07.910289     777 scope.go:117] "RemoveContainer" containerID="bceab25ea0060b1dee233e44f2e645942ce0df6aceef1d370ad02e64f2d1ad38"
	Oct 13 23:17:08 embed-certs-505482 kubelet[777]: I1013 23:17:08.912479     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:08 embed-certs-505482 kubelet[777]: E1013 23:17:08.912610     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:09 embed-certs-505482 kubelet[777]: I1013 23:17:09.914171     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:09 embed-certs-505482 kubelet[777]: E1013 23:17:09.914358     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:24 embed-certs-505482 kubelet[777]: I1013 23:17:24.759321     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:24 embed-certs-505482 kubelet[777]: I1013 23:17:24.962930     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: I1013 23:17:25.966643     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: E1013 23:17:25.967306     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: I1013 23:17:25.987774     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb" podStartSLOduration=3.216194685 podStartE2EDuration="24.987739006s" podCreationTimestamp="2025-10-13 23:17:01 +0000 UTC" firstStartedPulling="2025-10-13 23:17:01.728019748 +0000 UTC m=+10.196346201" lastFinishedPulling="2025-10-13 23:17:23.499564069 +0000 UTC m=+31.967890522" observedRunningTime="2025-10-13 23:17:23.981258682 +0000 UTC m=+32.449585143" watchObservedRunningTime="2025-10-13 23:17:25.987739006 +0000 UTC m=+34.456065459"
	Oct 13 23:17:28 embed-certs-505482 kubelet[777]: I1013 23:17:28.974274     777 scope.go:117] "RemoveContainer" containerID="2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c"
	Oct 13 23:17:29 embed-certs-505482 kubelet[777]: I1013 23:17:29.569157     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:29 embed-certs-505482 kubelet[777]: E1013 23:17:29.569514     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:43 embed-certs-505482 kubelet[777]: I1013 23:17:43.761118     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:43 embed-certs-505482 kubelet[777]: E1013 23:17:43.761325     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
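	
	The back-off values in the kubelet messages above (10s, then 20s) come from the container restart backoff, which doubles per crash up to a cap; kubelet's documented defaults are a 10s base and a 5-minute ceiling. A toy Go sketch of that schedule (illustrative constants, not kubelet source):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// crashLoopDelay doubles a 10s base delay per restart, capped at 5 minutes.
	func crashLoopDelay(restarts int) time.Duration {
		delay := 10 * time.Second
		for i := 0; i < restarts; i++ {
			delay *= 2
			if delay >= 5*time.Minute {
				return 5 * time.Minute
			}
		}
		return delay
	}
	
	func main() {
		for r := 0; r <= 5; r++ {
			fmt.Printf("after %d crashes: back-off %s\n", r, crashLoopDelay(r))
		}
	}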
	
	
	==> kubernetes-dashboard [5f462a4795dc27d43d5a62445569013d3c16f0e890b67a12d67306948c7749d7] <==
	2025/10/13 23:17:23 Using namespace: kubernetes-dashboard
	2025/10/13 23:17:23 Using in-cluster config to connect to apiserver
	2025/10/13 23:17:23 Using secret token for csrf signing
	2025/10/13 23:17:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:17:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:17:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:17:23 Generating JWE encryption key
	2025/10/13 23:17:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:17:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:17:24 Initializing JWE encryption key from synchronized object
	2025/10/13 23:17:24 Creating in-cluster Sidecar client
	2025/10/13 23:17:24 Serving insecurely on HTTP port: 9090
	2025/10/13 23:17:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:17:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:17:23 Starting overwatch
	
	
	==> storage-provisioner [2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c] <==
	I1013 23:16:58.320932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:17:28.322733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e] <==
	I1013 23:17:29.066730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:17:29.088558       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:17:29.088688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:17:29.093196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:32.548823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:36.810037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:40.409059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:43.462683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.486417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.493912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:46.494216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:17:46.495795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360!
	I1013 23:17:46.495898       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37da86f2-4daf-4130-84ca-e44ec1613cc8", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360 became leader
	W1013 23:17:46.510260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.522869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:46.602332       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360!
	W1013 23:17:48.527653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:48.535781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:50.545664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:50.556444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:52.560459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:52.576011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:54.579308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:54.590316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
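	
	The sequence above is client-go leader election: the provisioner tries to acquire the kube-system/k8s.io-minikube-hostpath lease, succeeds once the apiserver is reachable, and only then starts its controller. The repeated Endpoints deprecation warnings indicate it still uses an Endpoints-based lock; the sketch below deliberately swaps in the modern LeaseLock, so it shows the pattern rather than the provisioner's exact code:
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		id, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath", // lease name from the log
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("successfully acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}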
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-505482 -n embed-certs-505482: exit status 2 (549.869708ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-505482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-505482
helpers_test.go:243: (dbg) docker inspect embed-certs-505482:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	        "Created": "2025-10-13T23:14:55.44592554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:16:43.298339011Z",
	            "FinishedAt": "2025-10-13T23:16:42.219260272Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/hosts",
	        "LogPath": "/var/lib/docker/containers/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b-json.log",
	        "Name": "/embed-certs-505482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-505482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-505482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b",
	                "LowerDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5116eb3ee7844fab780a3ebbce3f8561967bc537a65c57f8ea501a3159223560/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-505482",
	                "Source": "/var/lib/docker/volumes/embed-certs-505482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-505482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-505482",
	                "name.minikube.sigs.k8s.io": "embed-certs-505482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1985e18a843a7172918dca7b3cf26f0da0522f65f424def1131e02efefa659a4",
	            "SandboxKey": "/var/run/docker/netns/1985e18a843a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-505482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:fc:a3:9f:05:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23158782726c8cb4fc25349485432199b9ed3873182fa18e871d267e9c5dee9e",
	                    "EndpointID": "c3563e5a776a234d0df032614b0b148f42c996e83fe8aa0ba40ecd7cf151b219",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-505482",
	                        "a9accf0872e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
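
The inspect dump above ends with the container's published ports: each guest port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral port on 127.0.0.1. Later in this log the harness templates exactly these fields out of docker inspect; a minimal Go sketch of that lookup, assuming the Docker CLI is on PATH and reusing the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the 127.0.0.1 port bound to a container port using
// the same Go-template expression the cli_runner lines below log, e.g.
// (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// 8443/tcp is the apiserver port in the Ports map above (host port 33472).
	p, err := hostPort("embed-certs-505482", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver reachable at 127.0.0.1:" + p)
}
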
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482: exit status 2 (549.39565ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
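
Exit status 2 from "minikube status" is not treated as fatal here: the helper prints the state it did get ("Running") and proceeds to the post-mortem logs. A sketch of that tolerant check, assuming the same binary and profile as this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-505482")
	out, err := cmd.Output()
	fmt.Printf("host state: %s", out)
	// minikube status reports cluster state through its exit code as
	// well as stdout, so a non-zero exit with usable output "may be
	// ok", as the helper puts it.
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
	}
}
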
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25
E1013 23:17:56.777655  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-505482 logs -n 25: (1.567132648s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:13 UTC │ 13 Oct 25 23:13 UTC │
	│ image   │ old-k8s-version-670275 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ pause   │ -p old-k8s-version-670275 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │                     │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:17:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:17:22.597368  628422 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:17:22.597589  628422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:22.597622  628422 out.go:374] Setting ErrFile to fd 2...
	I1013 23:17:22.597643  628422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:17:22.597927  628422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:17:22.598360  628422 out.go:368] Setting JSON to false
	I1013 23:17:22.599371  628422 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10779,"bootTime":1760386664,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:17:22.599467  628422 start.go:141] virtualization:  
	I1013 23:17:22.606120  628422 out.go:179] * [default-k8s-diff-port-033746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:17:22.609606  628422 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:17:22.609680  628422 notify.go:220] Checking for updates...
	I1013 23:17:22.616422  628422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:17:22.619666  628422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:17:22.622801  628422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:17:22.627294  628422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:17:22.630626  628422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:17:22.634216  628422 config.go:182] Loaded profile config "embed-certs-505482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:22.634413  628422 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:17:22.672383  628422 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:17:22.672517  628422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:22.765973  628422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:17:22.756381382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:17:22.766075  628422 docker.go:318] overlay module found
	I1013 23:17:22.770073  628422 out.go:179] * Using the docker driver based on user configuration
	I1013 23:17:22.773315  628422 start.go:305] selected driver: docker
	I1013 23:17:22.773334  628422 start.go:925] validating driver "docker" against <nil>
	I1013 23:17:22.773346  628422 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:17:22.774037  628422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:17:22.891535  628422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:17:22.874497635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
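
The two info.go lines above are the decoded output of docker system info --format "{{json .}}", which the start path runs once to probe the host and once more while validating the driver. A sketch of pulling just a few of those fields out of the JSON; the struct keys mirror names visible in the blob (Driver, CgroupDriver, NCPU, MemTotal), and everything else is ignored by json.Unmarshal:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the fields this log cares about; docker's JSON
// document has many more keys, which Unmarshal silently drops.
type dockerInfo struct {
	Driver          string `json:"Driver"`
	CgroupDriver    string `json:"CgroupDriver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%s, docker %s, %d CPUs, %d bytes RAM, %s/%s\n",
		info.OperatingSystem, info.ServerVersion, info.NCPU,
		info.MemTotal, info.Driver, info.CgroupDriver)
}
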
	I1013 23:17:22.891707  628422 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 23:17:22.891957  628422 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:17:22.896850  628422 out.go:179] * Using Docker driver with root privileges
	I1013 23:17:22.899855  628422 cni.go:84] Creating CNI manager for ""
	I1013 23:17:22.899928  628422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:17:22.899946  628422 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 23:17:22.900039  628422 start.go:349] cluster config:
	{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:17:22.903616  628422 out.go:179] * Starting "default-k8s-diff-port-033746" primary control-plane node in "default-k8s-diff-port-033746" cluster
	I1013 23:17:22.906569  628422 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:17:22.909600  628422 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:17:22.912699  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:22.912751  628422 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:17:22.912761  628422 cache.go:58] Caching tarball of preloaded images
	I1013 23:17:22.912777  628422 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:17:22.912838  628422 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:17:22.912847  628422 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:17:22.912958  628422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:17:22.912975  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json: {Name:mk8b5f27a831e52eb3ac20cd660bcee717949d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:22.940205  628422 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:17:22.940242  628422 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:17:22.940256  628422 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:17:22.940279  628422 start.go:360] acquireMachinesLock for default-k8s-diff-port-033746: {Name:mk4950372c3cd6b03a758b4772e5c43a69d20962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:17:22.940582  628422 start.go:364] duration metric: took 191.718µs to acquireMachinesLock for "default-k8s-diff-port-033746"
	I1013 23:17:22.940620  628422 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:17:22.940720  628422 start.go:125] createHost starting for "" (driver="docker")
	W1013 23:17:20.273727  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:22.280587  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:22.946588  628422 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:17:22.946850  628422 start.go:159] libmachine.API.Create for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:17:22.946907  628422 client.go:168] LocalClient.Create starting
	I1013 23:17:22.946990  628422 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:17:22.947028  628422 main.go:141] libmachine: Decoding PEM data...
	I1013 23:17:22.947046  628422 main.go:141] libmachine: Parsing certificate...
	I1013 23:17:22.947248  628422 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:17:22.947295  628422 main.go:141] libmachine: Decoding PEM data...
	I1013 23:17:22.947315  628422 main.go:141] libmachine: Parsing certificate...
	I1013 23:17:22.947680  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:17:22.979733  628422 cli_runner.go:211] docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:17:22.979827  628422 network_create.go:284] running [docker network inspect default-k8s-diff-port-033746] to gather additional debugging logs...
	I1013 23:17:22.979845  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746
	W1013 23:17:23.009760  628422 cli_runner.go:211] docker network inspect default-k8s-diff-port-033746 returned with exit code 1
	I1013 23:17:23.009797  628422 network_create.go:287] error running [docker network inspect default-k8s-diff-port-033746]: docker network inspect default-k8s-diff-port-033746: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-033746 not found
	I1013 23:17:23.009812  628422 network_create.go:289] output of [docker network inspect default-k8s-diff-port-033746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-033746 not found
	
	** /stderr **
	I1013 23:17:23.009938  628422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:17:23.029532  628422 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:17:23.029783  628422 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:17:23.030158  628422 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:17:23.030560  628422 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-23158782726c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:3f:b9:82:32:ff} reservation:<nil>}
	I1013 23:17:23.031165  628422 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019689b0}
	I1013 23:17:23.031194  628422 network_create.go:124] attempt to create docker network default-k8s-diff-port-033746 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 23:17:23.031271  628422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 default-k8s-diff-port-033746
	I1013 23:17:23.124512  628422 network_create.go:108] docker network default-k8s-diff-port-033746 192.168.85.0/24 created
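
The network.go lines above walk candidate /24 blocks, skipping every subnet an existing bridge already holds (49, 58, 67 and 76 were taken by earlier profiles) and settling on 192.168.85.0/24. A toy reimplementation of that scan, with the taken set hard-coded from this log instead of read back from docker network inspect:

package main

import "fmt"

func main() {
	// Subnets the log shows as already claimed by other profiles.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Step the third octet by 9 (49, 58, 67, ...), matching the
	// sequence of candidates the messages above try, and stop at the
	// first free block.
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
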
	I1013 23:17:23.124550  628422 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-033746" container
	I1013 23:17:23.124644  628422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:17:23.154654  628422 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-033746 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:17:23.174994  628422 oci.go:103] Successfully created a docker volume default-k8s-diff-port-033746
	I1013 23:17:23.175194  628422 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-033746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --entrypoint /usr/bin/test -v default-k8s-diff-port-033746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:17:24.171446  628422 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-033746
	I1013 23:17:24.171497  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:24.171517  628422 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:17:24.171587  628422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-033746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 23:17:24.782212  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:27.273515  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:28.627779  628422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-033746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.456138737s)
	I1013 23:17:28.627816  628422 kic.go:203] duration metric: took 4.45629601s to extract preloaded images to volume ...
	W1013 23:17:28.627943  628422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:17:28.628062  628422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:17:28.681355  628422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-033746 --name default-k8s-diff-port-033746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-033746 --network default-k8s-diff-port-033746 --ip 192.168.85.2 --volume default-k8s-diff-port-033746:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:17:29.053428  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Running}}
	I1013 23:17:29.077200  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.105211  628422 cli_runner.go:164] Run: docker exec default-k8s-diff-port-033746 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:17:29.163007  628422 oci.go:144] the created container "default-k8s-diff-port-033746" has a running status.
	I1013 23:17:29.163043  628422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa...
	I1013 23:17:29.470708  628422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:17:29.497454  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.525078  628422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:17:29.525106  628422 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-033746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:17:29.602760  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:17:29.629401  628422 machine.go:93] provisionDockerMachine start ...
	I1013 23:17:29.629509  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:29.656490  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:29.656834  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:29.656852  628422 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:17:29.657536  628422 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1013 23:17:29.273956  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	W1013 23:17:31.773018  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:32.803208  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:17:32.803233  628422 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-033746"
	I1013 23:17:32.803293  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:32.821033  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:32.821357  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:32.821375  628422 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-033746 && echo "default-k8s-diff-port-033746" | sudo tee /etc/hostname
	I1013 23:17:32.987605  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:17:32.987778  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:33.010463  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:33.010963  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:33.010998  628422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-033746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-033746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-033746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:17:33.159410  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:17:33.159448  628422 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:17:33.159518  628422 ubuntu.go:190] setting up certificates
	I1013 23:17:33.159530  628422 provision.go:84] configureAuth start
	I1013 23:17:33.159616  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:33.176520  628422 provision.go:143] copyHostCerts
	I1013 23:17:33.176607  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:17:33.176623  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:17:33.176700  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:17:33.176791  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:17:33.176802  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:17:33.176829  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:17:33.176885  628422 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:17:33.176895  628422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:17:33.176919  628422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:17:33.176971  628422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-033746 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-033746 localhost minikube]
	I1013 23:17:34.217083  628422 provision.go:177] copyRemoteCerts
	I1013 23:17:34.217153  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:17:34.217201  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.234756  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:34.339185  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 23:17:34.357620  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:17:34.378788  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:17:34.398246  628422 provision.go:87] duration metric: took 1.238686044s to configureAuth
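
configureAuth above copies the host CA material and then mints a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.85.2, the profile name, localhost and minikube. A compressed sketch of that generation step with crypto/x509; it self-signs for brevity, whereas the real provisioner signs with the ca.pem/ca-key.pem pair named in the paths above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-033746"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
		// SANs matching the san=[...] list in the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-033746", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; minikube uses the CA certificate as parent and
	// the CA key as signer instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
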
	I1013 23:17:34.398276  628422 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:17:34.398471  628422 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:17:34.398599  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.415905  628422 main.go:141] libmachine: Using SSH client type: native
	I1013 23:17:34.416261  628422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1013 23:17:34.416282  628422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:17:34.762622  628422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:17:34.762647  628422 machine.go:96] duration metric: took 5.133223131s to provisionDockerMachine
	I1013 23:17:34.762674  628422 client.go:171] duration metric: took 11.815738564s to LocalClient.Create
	I1013 23:17:34.762690  628422 start.go:167] duration metric: took 11.815842078s to libmachine.API.Create "default-k8s-diff-port-033746"
	I1013 23:17:34.762701  628422 start.go:293] postStartSetup for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:17:34.762714  628422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:17:34.762790  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:17:34.762848  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.781135  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:34.883383  628422 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:17:34.886525  628422 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:17:34.886551  628422 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:17:34.886561  628422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:17:34.886621  628422 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:17:34.886698  628422 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:17:34.886803  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:17:34.899661  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:17:34.921786  628422 start.go:296] duration metric: took 159.06412ms for postStartSetup
	I1013 23:17:34.922186  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:34.939038  628422 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:17:34.939366  628422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:17:34.939424  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:34.957087  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.060077  628422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:17:35.064631  628422 start.go:128] duration metric: took 12.123873733s to createHost
	I1013 23:17:35.064657  628422 start.go:83] releasing machines lock for "default-k8s-diff-port-033746", held for 12.124056826s
	I1013 23:17:35.064741  628422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:17:35.083257  628422 ssh_runner.go:195] Run: cat /version.json
	I1013 23:17:35.083297  628422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:17:35.083335  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:35.083386  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:17:35.106529  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.125668  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:17:35.303020  628422 ssh_runner.go:195] Run: systemctl --version
	I1013 23:17:35.309553  628422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:17:35.353541  628422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:17:35.358039  628422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:17:35.358143  628422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:17:35.387672  628422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 23:17:35.387694  628422 start.go:495] detecting cgroup driver to use...
	I1013 23:17:35.387727  628422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:17:35.387779  628422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:17:35.405407  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:17:35.418533  628422 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:17:35.418640  628422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:17:35.436293  628422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:17:35.454316  628422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:17:35.582877  628422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:17:35.713194  628422 docker.go:234] disabling docker service ...
	I1013 23:17:35.713286  628422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:17:35.737565  628422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:17:35.753883  628422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:17:35.905536  628422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:17:36.040460  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:17:36.055490  628422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:17:36.070769  628422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:17:36.070887  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.080392  628422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:17:36.080506  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.089831  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.100068  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.109760  628422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:17:36.118416  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.127590  628422 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.141532  628422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:17:36.150696  628422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:17:36.158466  628422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:17:36.166171  628422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:17:36.298507  628422 ssh_runner.go:195] Run: sudo systemctl restart crio
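
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. A compressed Go sketch of the same edit-and-restart pattern over SSH; the runCmd helper and plain ssh invocation are assumptions for illustration, not minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a shell command on a remote host over ssh (hypothetical helper).
func runCmd(host, cmd string) error {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w (%s)", cmd, err, out)
	}
	return nil
}

func configureCRIO(host, pauseImage, cgroupMgr string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// point CRI-O at the pause image used by this Kubernetes version
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		// keep CRI-O's cgroup manager in sync with the kubelet's cgroup driver
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		// reload unit state, then restart the runtime to apply the edits
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := runCmd(host, s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("docker@127.0.0.1", "registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}
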
	I1013 23:17:36.426153  628422 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:17:36.426273  628422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:17:36.430303  628422 start.go:563] Will wait 60s for crictl version
	I1013 23:17:36.430416  628422 ssh_runner.go:195] Run: which crictl
	I1013 23:17:36.434036  628422 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:17:36.462807  628422 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:17:36.462943  628422 ssh_runner.go:195] Run: crio --version
	I1013 23:17:36.499756  628422 ssh_runner.go:195] Run: crio --version
	I1013 23:17:36.538809  628422 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:17:36.541803  628422 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:17:36.560551  628422 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:17:36.564367  628422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:17:36.574114  628422 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:17:36.574232  628422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:17:36.574304  628422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:17:36.609381  628422 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:17:36.609407  628422 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:17:36.609463  628422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:17:36.636054  628422 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:17:36.636076  628422 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:17:36.636084  628422 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1013 23:17:36.636200  628422 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-033746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:17:36.636287  628422 ssh_runner.go:195] Run: crio config
	I1013 23:17:36.697799  628422 cni.go:84] Creating CNI manager for ""
	I1013 23:17:36.697824  628422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:17:36.697848  628422 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:17:36.697873  628422 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-033746 NodeName:default-k8s-diff-port-033746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:17:36.698003  628422 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-033746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
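
The rendered config above carries four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is scp'd to /var/tmp/minikube/kubeadm.yaml.new below. One check worth automating is that the KubeletConfiguration's cgroupDriver matches the cgroup_manager written into the CRI-O config earlier in this log; a sketch using gopkg.in/yaml.v3, which is an assumption — minikube templates this file rather than re-parsing it:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Walk each "---"-separated document until EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			// Should print "cgroupfs" to match CRI-O's cgroup_manager.
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		}
	}
}
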
	
	I1013 23:17:36.698080  628422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:17:36.706991  628422 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:17:36.707107  628422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:17:36.714811  628422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 23:17:36.727812  628422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:17:36.741237  628422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 23:17:36.755302  628422 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:17:36.759021  628422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:17:36.769467  628422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:17:36.904306  628422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:17:36.928371  628422 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746 for IP: 192.168.85.2
	I1013 23:17:36.928395  628422 certs.go:195] generating shared ca certs ...
	I1013 23:17:36.928412  628422 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:36.929075  628422 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:17:36.929168  628422 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:17:36.929203  628422 certs.go:257] generating profile certs ...
	I1013 23:17:36.929287  628422 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key
	I1013 23:17:36.929335  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt with IP's: []
	I1013 23:17:37.513159  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt ...
	I1013 23:17:37.513195  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: {Name:mk3f1999a47229872bbe82dd21e503d52cdc2f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:37.513404  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key ...
	I1013 23:17:37.513423  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key: {Name:mke849aae776be0331a4f2b5a2d673fd823b3bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:37.513525  628422 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68
	I1013 23:17:37.513546  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	W1013 23:17:33.773776  624746 pod_ready.go:104] pod "coredns-66bc5c9577-6rtz5" is not "Ready", error: <nil>
	I1013 23:17:36.273782  624746 pod_ready.go:94] pod "coredns-66bc5c9577-6rtz5" is "Ready"
	I1013 23:17:36.273807  624746 pod_ready.go:86] duration metric: took 36.506776206s for pod "coredns-66bc5c9577-6rtz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.277428  624746 pod_ready.go:83] waiting for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.284287  624746 pod_ready.go:94] pod "etcd-embed-certs-505482" is "Ready"
	I1013 23:17:36.284310  624746 pod_ready.go:86] duration metric: took 6.860392ms for pod "etcd-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.287565  624746 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.294345  624746 pod_ready.go:94] pod "kube-apiserver-embed-certs-505482" is "Ready"
	I1013 23:17:36.294374  624746 pod_ready.go:86] duration metric: took 6.785045ms for pod "kube-apiserver-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.299134  624746 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.472070  624746 pod_ready.go:94] pod "kube-controller-manager-embed-certs-505482" is "Ready"
	I1013 23:17:36.472109  624746 pod_ready.go:86] duration metric: took 172.911942ms for pod "kube-controller-manager-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:36.671667  624746 pod_ready.go:83] waiting for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.071824  624746 pod_ready.go:94] pod "kube-proxy-n2g5d" is "Ready"
	I1013 23:17:37.071850  624746 pod_ready.go:86] duration metric: took 400.15411ms for pod "kube-proxy-n2g5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.272124  624746 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.672252  624746 pod_ready.go:94] pod "kube-scheduler-embed-certs-505482" is "Ready"
	I1013 23:17:37.672292  624746 pod_ready.go:86] duration metric: took 400.13903ms for pod "kube-scheduler-embed-certs-505482" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:17:37.672308  624746 pod_ready.go:40] duration metric: took 37.910206682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:17:37.785181  624746 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:17:37.788351  624746 out.go:179] * Done! kubectl is now configured to use "embed-certs-505482" cluster and "default" namespace by default
	I1013 23:17:38.059638  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 ...
	I1013 23:17:38.059711  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68: {Name:mkcbf8423ae15290a421453eb7ff40a11672752d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.059903  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68 ...
	I1013 23:17:38.059922  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68: {Name:mk29a46bfc7ce983dca37486b1f6be13f5b3bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.060002  628422 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt.5040eb68 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt
	I1013 23:17:38.060094  628422 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key
	I1013 23:17:38.060157  628422 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key
	I1013 23:17:38.060182  628422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt with IP's: []
	I1013 23:17:38.543335  628422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt ...
	I1013 23:17:38.543409  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt: {Name:mkb2fea196c26470f972e26befe5d1ac05b7493a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:17:38.543644  628422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key ...
	I1013 23:17:38.543688  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key: {Name:mkc8a51b77e58fa63683950881065ff746ff3c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
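
The certs.go/crypto.go lines above generate the profile's client, apiserver, and aggregator proxy-client certificates, signing the apiserver cert for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A self-contained Go sketch of CA-signed serving-cert generation with those SANs; key size, validity, and subject names are illustrative, errors are elided for brevity, and minikube's actual flow reuses the existing minikubeCA instead of creating one:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA for the sketch (minikube would load minikubeCA from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate carrying the IP SANs from the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
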
	I1013 23:17:38.545430  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:17:38.545515  628422 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:17:38.545542  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:17:38.545589  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:17:38.545634  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:17:38.545686  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:17:38.545756  628422 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:17:38.546394  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:17:38.565504  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:17:38.592382  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:17:38.617151  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:17:38.636472  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 23:17:38.655686  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:17:38.673813  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:17:38.694847  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:17:38.716149  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:17:38.734265  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:17:38.753205  628422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:17:38.774690  628422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:17:38.787344  628422 ssh_runner.go:195] Run: openssl version
	I1013 23:17:38.793647  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:17:38.802210  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.806052  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.806151  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:17:38.847919  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:17:38.857393  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:17:38.865879  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.870036  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.870148  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:17:38.914381  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:17:38.922855  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:17:38.931402  628422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.935437  628422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.935535  628422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:17:38.976421  628422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
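
The `openssl x509 -hash` calls above explain the otherwise opaque symlink names: OpenSSL-based clients look up CAs in /etc/ssl/certs by subject hash, so 51391683.0, 3ec20f2e.0, and b5213941.0 are hash-named links to the three installed PEM files. A small Go sketch of that hash-and-link step; the helper name and example path are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash mimics `ln -fs <pem> /etc/ssl/certs/<hash>.0`, using the
// subject hash printed by `openssl x509 -hash -noout` as the link name.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // -f semantics: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
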
	I1013 23:17:38.985209  628422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:17:38.988942  628422 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:17:38.989032  628422 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:17:38.989120  628422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:17:38.989188  628422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:17:39.029932  628422 cri.go:89] found id: ""
	I1013 23:17:39.030014  628422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:17:39.038543  628422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:17:39.046924  628422 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:17:39.046991  628422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:17:39.055562  628422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:17:39.055637  628422 kubeadm.go:157] found existing configuration files:
	
	I1013 23:17:39.055713  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 23:17:39.063907  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:17:39.064022  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:17:39.071813  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 23:17:39.079537  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:17:39.079622  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:17:39.087245  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 23:17:39.095378  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:17:39.095441  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:17:39.103804  628422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 23:17:39.111784  628422 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:17:39.111854  628422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 23:17:39.120041  628422 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:17:39.159276  628422 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:17:39.159642  628422 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:17:39.194920  628422 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:17:39.194999  628422 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:17:39.195049  628422 kubeadm.go:318] OS: Linux
	I1013 23:17:39.195136  628422 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:17:39.195192  628422 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:17:39.195246  628422 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:17:39.195301  628422 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:17:39.195355  628422 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:17:39.195417  628422 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:17:39.195469  628422 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:17:39.195523  628422 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:17:39.195575  628422 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:17:39.276616  628422 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:17:39.276737  628422 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:17:39.276837  628422 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:17:39.291624  628422 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:17:39.295346  628422 out.go:252]   - Generating certificates and keys ...
	I1013 23:17:39.295475  628422 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:17:39.295564  628422 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:17:39.501127  628422 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:17:40.383927  628422 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:17:41.411937  628422 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:17:42.769042  628422 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:17:43.068915  628422 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:17:43.069373  628422 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-033746 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:17:43.326398  628422 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:17:43.326816  628422 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-033746 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 23:17:44.836889  628422 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:17:45.338367  628422 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:17:45.492945  628422 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:17:45.493527  628422 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:17:46.173043  628422 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:17:46.532979  628422 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:17:46.795696  628422 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:17:47.031979  628422 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:17:47.556896  628422 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:17:47.557811  628422 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:17:47.562940  628422 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:17:47.566703  628422 out.go:252]   - Booting up control plane ...
	I1013 23:17:47.566809  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:17:47.566890  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:17:47.566960  628422 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:17:47.583603  628422 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:17:47.583733  628422 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:17:47.592020  628422 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:17:47.592548  628422 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:17:47.592626  628422 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:17:47.737185  628422 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:17:47.737324  628422 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 23:17:48.742709  628422 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001976248s
	I1013 23:17:48.742825  628422 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:17:48.742915  628422 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1013 23:17:48.743012  628422 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:17:48.743120  628422 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
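
kubeadm's wait loop above polls the kubelet's local healthz endpoint (port 10248) and the three control-plane component endpoints, each capped at 4m0s; here the kubelet reported healthy after about a second. A minimal sketch of such a poll loop, with the URL taken from the log and the retry interval an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz port from the log; 4m0s matches kubeadm's cap.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
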
	
	
	==> CRI-O <==
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.97603728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f29eb93a-d2f9-4dff-9baf-6868cdc37de3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.977015058Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=603bd5ab-2edb-4c23-931f-770eb588d2ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.977261642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986560786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986727995Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc06a14c26a098050d2b1e6cc6e824dd48d029a6cccc0241c4ee8adc9b4dc80b/merged/etc/passwd: no such file or directory"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986747901Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc06a14c26a098050d2b1e6cc6e824dd48d029a6cccc0241c4ee8adc9b4dc80b/merged/etc/group: no such file or directory"
	Oct 13 23:17:28 embed-certs-505482 crio[652]: time="2025-10-13T23:17:28.986980946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.022877293Z" level=info msg="Created container a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e: kube-system/storage-provisioner/storage-provisioner" id=603bd5ab-2edb-4c23-931f-770eb588d2ad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.024037278Z" level=info msg="Starting container: a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e" id=56bf1afa-3ff7-4200-8c30-fe9f20cfcd0d name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:17:29 embed-certs-505482 crio[652]: time="2025-10-13T23:17:29.027343592Z" level=info msg="Started container" PID=1641 containerID=a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e description=kube-system/storage-provisioner/storage-provisioner id=56bf1afa-3ff7-4200-8c30-fe9f20cfcd0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7182e0cded57327be22fc912c8f56f7871eac8a6d68984f9a5af6ec980bb892
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.576492832Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582325068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582610519Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.582724962Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589181834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589216122Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.589237521Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593214598Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593243406Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.593263245Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596620651Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596778342Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.596886771Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.60184759Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 23:17:38 embed-certs-505482 crio[652]: time="2025-10-13T23:17:38.601879106Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a9da06268e166       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   f7182e0cded57       storage-provisioner                          kube-system
	63b9c13139dbc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   643951c7635de       dashboard-metrics-scraper-6ffb444bf9-5skjj   kubernetes-dashboard
	5f462a4795dc2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   27b2b90aee567       kubernetes-dashboard-855c9754f9-6dnwb        kubernetes-dashboard
	175dece2a6492       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   3d710c182abd9       busybox                                      default
	f4a1214c931c9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   bbffc1be43de5       coredns-66bc5c9577-6rtz5                     kube-system
	1de622fa96b2b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   f0d04249b7ce8       kube-proxy-n2g5d                             kube-system
	2db03c15b29f4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   f7182e0cded57       storage-provisioner                          kube-system
	d2eeb55a84126       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   bc69d255b7be6       kindnet-zf5h8                                kube-system
	964e0548ee889       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c04bcca959558       etcd-embed-certs-505482                      kube-system
	116eb96f8d736       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f912e9956e838       kube-apiserver-embed-certs-505482            kube-system
	571a3921ae313       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1d90043f20bde       kube-controller-manager-embed-certs-505482   kube-system
	dd86b0b8cf2e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e5fe14ca864f5       kube-scheduler-embed-certs-505482            kube-system
	
	
	==> coredns [f4a1214c931c9defa831cf0eaeec82e7070c56e644d8b14c06ce8faf2632027b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51233 - 54537 "HINFO IN 5406872875660288081.5278178832335964055. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01274998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-505482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-505482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=embed-certs-505482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_15_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:15:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-505482
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:17:48 +0000   Mon, 13 Oct 2025 23:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-505482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                19aef056-c1a4-490a-8aaa-19c46d6c5605
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-6rtz5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-embed-certs-505482                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-zf5h8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-embed-certs-505482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-embed-certs-505482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-n2g5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-embed-certs-505482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5skjj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6dnwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s (x8 over 2m41s)  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m26s                  node-controller  Node embed-certs-505482 event: Registered Node embed-certs-505482 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-505482 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node embed-certs-505482 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node embed-certs-505482 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node embed-certs-505482 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node embed-certs-505482 event: Registered Node embed-certs-505482 in Controller
	
	
	==> dmesg <==
	[Oct13 22:54] overlayfs: idmapped layers are currently not supported
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [964e0548ee889c7cb00c0e33604118130c516ddd2211c9537910442a46e17ed5] <==
	{"level":"warn","ts":"2025-10-13T23:16:55.906624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.920198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.942087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.953579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.979753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:55.995543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.010615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.033943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.052011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.086627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.117262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.137148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.151958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.178847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.215740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.216737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.237440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.265809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.307824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.322686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.347013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.384843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.403980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.421176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:16:56.485922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:17:57 up  3:00,  0 user,  load average: 4.46, 3.62, 2.82
	Linux embed-certs-505482 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d2eeb55a841266881586a9e6bb16d8a862f1e4e7acc16d9ad2aa9d2515547900] <==
	I1013 23:16:58.403281       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:16:58.403587       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:16:58.403715       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:16:58.403727       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:16:58.403737       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:16:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:16:58.576026       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:16:58.576045       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:16:58.576055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:16:58.603941       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:17:28.576777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:17:28.577417       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:17:28.604470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:17:28.604470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 23:17:30.276194       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:17:30.276234       1 metrics.go:72] Registering metrics
	I1013 23:17:30.276299       1 controller.go:711] "Syncing nftables rules"
	I1013 23:17:38.576108       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:17:38.576171       1 main.go:301] handling current node
	I1013 23:17:48.578255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 23:17:48.578390       1 main.go:301] handling current node
	
	
	==> kube-apiserver [116eb96f8d736a4d212167c1ba57bf8044972f29d8801f70ffca6261a57399b3] <==
	I1013 23:16:57.589777       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:16:57.596039       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:16:57.596178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:16:57.605416       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:16:57.607405       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:16:57.607477       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:16:57.608099       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:16:57.611718       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:16:57.612768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:16:57.612863       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:16:57.612871       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:16:57.612876       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:16:57.612883       1 cache.go:39] Caches are synced for autoregister controller
	E1013 23:16:57.702832       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:16:57.777554       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:16:58.103858       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:16:58.606863       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:16:58.712055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:16:58.874641       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:16:58.980678       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:16:59.186590       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.46.243"}
	I1013 23:16:59.208008       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.143.160"}
	I1013 23:17:01.003324       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:17:01.254813       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:17:01.371256       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [571a3921ae313b746dc750f163cd023508f28ff3bf97977e5f8f7faab03157e7] <==
	I1013 23:17:00.761625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 23:17:00.761650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 23:17:00.765896       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:17:00.766290       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 23:17:00.772544       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:17:00.773674       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:17:00.773768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:17:00.777213       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:17:00.782595       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:17:00.787824       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:17:00.789052       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:17:00.795935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:17:00.795935       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:17:00.796974       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:17:00.797121       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-505482"
	I1013 23:17:00.797221       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 23:17:00.797002       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:17:00.797021       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:17:00.802543       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:17:00.809506       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:17:00.810662       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:17:00.810727       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:17:00.830586       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:17:00.830618       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:17:00.830629       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1de622fa96b2bb4766f5054c4bff72b46522d9894bb62e172bced8c9bfb56f38] <==
	I1013 23:16:59.084811       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:16:59.261713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:16:59.364540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:16:59.364848       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:16:59.364978       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:16:59.429052       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:16:59.429175       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:16:59.436473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:16:59.436824       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:16:59.442991       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:59.444170       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:16:59.455003       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:16:59.444835       1 config.go:309] "Starting node config controller"
	I1013 23:16:59.455188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:16:59.455218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:16:59.445310       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:16:59.455282       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:16:59.445303       1 config.go:200] "Starting service config controller"
	I1013 23:16:59.455359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:16:59.555657       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:16:59.555659       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:16:59.555768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd86b0b8cf2e77ea5e9fb894aa6375e33bcdad7cd483eb155b4e5002125e49b7] <==
	I1013 23:16:55.760100       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:16:57.176962       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:16:57.177081       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:16:57.177131       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:16:57.177173       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:16:57.428917       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:16:57.428954       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:16:57.446209       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:16:57.446420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:57.446436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:16:57.446456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:16:57.655311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: I1013 23:17:01.495550     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a22d237a-c2a5-46ab-805f-ae6fbea82083-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6dnwb\" (UID: \"a22d237a-c2a5-46ab-805f-ae6fbea82083\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb"
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: I1013 23:17:01.495575     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwzr\" (UniqueName: \"kubernetes.io/projected/a22d237a-c2a5-46ab-805f-ae6fbea82083-kube-api-access-nmwzr\") pod \"kubernetes-dashboard-855c9754f9-6dnwb\" (UID: \"a22d237a-c2a5-46ab-805f-ae6fbea82083\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb"
	Oct 13 23:17:01 embed-certs-505482 kubelet[777]: W1013 23:17:01.723534     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a9accf0872e7f4d8b40c00b54deb5e4d1697cf60c7d81b695f884d370be86d6b/crio-27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84 WatchSource:0}: Error finding container 27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84: Status 404 returned error can't find the container with id 27b2b90aee567b4187fe0a5860a3357449a81f8bbe3aa73fd365774163755a84
	Oct 13 23:17:05 embed-certs-505482 kubelet[777]: I1013 23:17:05.861419     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 23:17:06 embed-certs-505482 kubelet[777]: I1013 23:17:06.900750     777 scope.go:117] "RemoveContainer" containerID="bceab25ea0060b1dee233e44f2e645942ce0df6aceef1d370ad02e64f2d1ad38"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: I1013 23:17:07.908420     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: E1013 23:17:07.908586     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:07 embed-certs-505482 kubelet[777]: I1013 23:17:07.910289     777 scope.go:117] "RemoveContainer" containerID="bceab25ea0060b1dee233e44f2e645942ce0df6aceef1d370ad02e64f2d1ad38"
	Oct 13 23:17:08 embed-certs-505482 kubelet[777]: I1013 23:17:08.912479     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:08 embed-certs-505482 kubelet[777]: E1013 23:17:08.912610     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:09 embed-certs-505482 kubelet[777]: I1013 23:17:09.914171     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:09 embed-certs-505482 kubelet[777]: E1013 23:17:09.914358     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:24 embed-certs-505482 kubelet[777]: I1013 23:17:24.759321     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:24 embed-certs-505482 kubelet[777]: I1013 23:17:24.962930     777 scope.go:117] "RemoveContainer" containerID="b7ff86402c6e7ed5531dbb1d98f8c5ad33bc12add39b1d247152a6b575103922"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: I1013 23:17:25.966643     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: E1013 23:17:25.967306     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:25 embed-certs-505482 kubelet[777]: I1013 23:17:25.987774     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6dnwb" podStartSLOduration=3.216194685 podStartE2EDuration="24.987739006s" podCreationTimestamp="2025-10-13 23:17:01 +0000 UTC" firstStartedPulling="2025-10-13 23:17:01.728019748 +0000 UTC m=+10.196346201" lastFinishedPulling="2025-10-13 23:17:23.499564069 +0000 UTC m=+31.967890522" observedRunningTime="2025-10-13 23:17:23.981258682 +0000 UTC m=+32.449585143" watchObservedRunningTime="2025-10-13 23:17:25.987739006 +0000 UTC m=+34.456065459"
	Oct 13 23:17:28 embed-certs-505482 kubelet[777]: I1013 23:17:28.974274     777 scope.go:117] "RemoveContainer" containerID="2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c"
	Oct 13 23:17:29 embed-certs-505482 kubelet[777]: I1013 23:17:29.569157     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:29 embed-certs-505482 kubelet[777]: E1013 23:17:29.569514     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:43 embed-certs-505482 kubelet[777]: I1013 23:17:43.761118     777 scope.go:117] "RemoveContainer" containerID="63b9c13139dbcd6c91af55e26b063d2ac5b5eae2e2e5be10588c9fe277923514"
	Oct 13 23:17:43 embed-certs-505482 kubelet[777]: E1013 23:17:43.761325     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5skjj_kubernetes-dashboard(9991f718-468d-48f5-a642-29cf6a876c11)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5skjj" podUID="9991f718-468d-48f5-a642-29cf6a876c11"
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:17:50 embed-certs-505482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5f462a4795dc27d43d5a62445569013d3c16f0e890b67a12d67306948c7749d7] <==
	2025/10/13 23:17:23 Using namespace: kubernetes-dashboard
	2025/10/13 23:17:23 Using in-cluster config to connect to apiserver
	2025/10/13 23:17:23 Using secret token for csrf signing
	2025/10/13 23:17:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:17:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:17:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:17:23 Generating JWE encryption key
	2025/10/13 23:17:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:17:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:17:24 Initializing JWE encryption key from synchronized object
	2025/10/13 23:17:24 Creating in-cluster Sidecar client
	2025/10/13 23:17:24 Serving insecurely on HTTP port: 9090
	2025/10/13 23:17:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:17:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:17:23 Starting overwatch
	
	
	==> storage-provisioner [2db03c15b29f4470dc1af87e61bd98914b8a2d2e891887bb6cbb765ef7b8f52c] <==
	I1013 23:16:58.320932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:17:28.322733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a9da06268e166cc7bbb3283ba0c467ae5d738f271b8acf1a926511234fa8f03e] <==
	I1013 23:17:29.088558       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:17:29.088688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:17:29.093196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:32.548823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:36.810037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:40.409059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:43.462683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.486417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.493912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:46.494216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:17:46.495795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360!
	I1013 23:17:46.495898       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37da86f2-4daf-4130-84ca-e44ec1613cc8", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360 became leader
	W1013 23:17:46.510260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:46.522869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:17:46.602332       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-505482_f7998cc2-2c45-41e3-a83f-54dbd38fe360!
	W1013 23:17:48.527653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:48.535781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:50.545664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:50.556444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:52.560459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:52.576011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:54.579308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:54.590316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:56.595353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:17:56.602966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-505482 -n embed-certs-505482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-505482 -n embed-certs-505482: exit status 2 (529.448244ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
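	The --format flag passed to status above is a Go text/template rendered against minikube's status struct; the bare "Running" in the captured stdout is the value of its APIServer field. A minimal stand-in follows (the Status type here is an illustrative subset, not minikube's actual struct):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative subset of the struct minikube renders;
	// the field name matches the template key used on the command line.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// "{{.APIServer}}" is the same template string passed via --format above.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running", matching the stdout captured by the harness.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}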
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-505482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.79s)
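	The storage-provisioner log above repeats "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" because it still acquires its leader lease (kube-system/k8s.io-minikube-hostpath) through an Endpoints-based lock. A minimal sketch of the same election against the coordination.k8s.io Lease API, which avoids those warnings, assuming client-go with in-cluster config; the lease name and identity below are illustrative:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A Lease-based lock replaces the Endpoints-based one that
		// produces the deprecation warnings in the provisioner log.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // illustrative lease name
				Namespace: "kube-system",
			},
			Client: client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: os.Getenv("HOSTNAME"), // pod name works as an identity
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership; shutting down")
				},
			},
		})
	}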

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (323.263997ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:18:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
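	The MK_ADDON_ENABLE_PAUSED failure above comes from the paused check shelling out to `sudo runc list -f json` on the node and treating a non-zero exit as fatal; with cri-o the runc state directory /run/runc may simply not exist yet, which is the "open /run/runc: no such file or directory" in stderr. A rough sketch of that kind of probe (the struct fields and error handling are illustrative, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields of interest from `runc list -f json`
	// (illustrative subset of runc's container state output).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "paused", "running"
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the path taken in the log above: /run/runc is
			// missing, runc exits 1, and the whole check fails.
			return nil, fmt.Errorf("runc list -f json: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}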
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-041709
helpers_test.go:243: (dbg) docker inspect newest-cni-041709:

-- stdout --
	[
	    {
	        "Id": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	        "Created": "2025-10-13T23:18:08.094436918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632801,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:18:08.15973867Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hosts",
	        "LogPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd-json.log",
	        "Name": "/newest-cni-041709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-041709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-041709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	                "LowerDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-041709",
	                "Source": "/var/lib/docker/volumes/newest-cni-041709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-041709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-041709",
	                "name.minikube.sigs.k8s.io": "newest-cni-041709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e8b7c88749db417fcdf75decc4218dcf84546a3dd4beac984482a43f2101d4f",
	            "SandboxKey": "/var/run/docker/netns/8e8b7c88749d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-041709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:fb:a9:98:30:bf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3df7c953cf9f4e0e97cdf9e47b4f15792247e0d1f7edb011f023caaa15ec476f",
	                    "EndpointID": "9979fb266996e096989217cece775d565181088b211e5d6addbfd323849658c9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-041709",
	                        "06492791cd8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
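	Everything the harness needs from the inspect dump above is also reachable programmatically; for example, the host port bound to the node's 8443/tcp API endpoint (127.0.0.1:33482 in this run) can be read with the Docker Go SDK. A small sketch, assuming github.com/docker/docker is available and the Docker host comes from the environment; the container name is taken from the log:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		info, err := cli.ContainerInspect(context.Background(), "newest-cni-041709")
		if err != nil {
			log.Fatal(err)
		}
		if info.NetworkSettings == nil {
			log.Fatal("no network settings for container")
		}
		// Ports maps a container port ("8443/tcp") to its host bindings;
		// the inspect output above shows it bound to 127.0.0.1:33482.
		for _, b := range info.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIP, b.HostPort)
		}
	}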
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25: (1.089789938s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-670275                                                                                                                                                                                                                     │ old-k8s-version-670275       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:15 UTC │
	│ start   │ -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ delete  │ -p cert-expiration-896873                                                                                                                                                                                                                     │ cert-expiration-896873       │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:14 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:18:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:18:02.003595  632059 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:18:02.003818  632059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:02.003842  632059 out.go:374] Setting ErrFile to fd 2...
	I1013 23:18:02.003850  632059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:02.004267  632059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:18:02.004876  632059 out.go:368] Setting JSON to false
	I1013 23:18:02.006012  632059 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10818,"bootTime":1760386664,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:18:02.006100  632059 start.go:141] virtualization:  
	I1013 23:18:02.011955  632059 out.go:179] * [newest-cni-041709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:18:02.015212  632059 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:18:02.015249  632059 notify.go:220] Checking for updates...
	I1013 23:18:02.021540  632059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:18:02.024801  632059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:02.028847  632059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:18:02.031789  632059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:18:02.034699  632059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:18:02.038149  632059 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:02.038315  632059 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:18:02.074018  632059 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:18:02.074160  632059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:02.135913  632059 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:02.124380026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:02.136021  632059 docker.go:318] overlay module found
	I1013 23:18:02.139245  632059 out.go:179] * Using the docker driver based on user configuration
	I1013 23:18:02.142213  632059 start.go:305] selected driver: docker
	I1013 23:18:02.142241  632059 start.go:925] validating driver "docker" against <nil>
	I1013 23:18:02.142256  632059 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:18:02.143013  632059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:02.223248  632059 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:02.211623164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:02.223412  632059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1013 23:18:02.223437  632059 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1013 23:18:02.223665  632059 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:18:02.227178  632059 out.go:179] * Using Docker driver with root privileges
	I1013 23:18:02.229996  632059 cni.go:84] Creating CNI manager for ""
	I1013 23:18:02.230076  632059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:02.230092  632059 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 23:18:02.230176  632059 start.go:349] cluster config:
	{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:02.233310  632059 out.go:179] * Starting "newest-cni-041709" primary control-plane node in "newest-cni-041709" cluster
	I1013 23:18:02.236176  632059 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:18:02.239387  632059 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:18:02.242202  632059 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:02.242263  632059 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:18:02.242276  632059 cache.go:58] Caching tarball of preloaded images
	I1013 23:18:02.242376  632059 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:18:02.242387  632059 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:18:02.242499  632059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:02.242516  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json: {Name:mk020baa816a3f24d27cea9fe07e964abc0feaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:02.242682  632059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:18:02.271834  632059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:18:02.271855  632059 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:18:02.271867  632059 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:18:02.271890  632059 start.go:360] acquireMachinesLock for newest-cni-041709: {Name:mk550fb39e8064c08d6ccaf342c21fc53a30808d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:18:02.271988  632059 start.go:364] duration metric: took 83.322µs to acquireMachinesLock for "newest-cni-041709"
	I1013 23:18:02.272014  632059 start.go:93] Provisioning new machine with config: &{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:02.272083  632059 start.go:125] createHost starting for "" (driver="docker")
	I1013 23:17:59.670624  628422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:17:59.675661  628422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:17:59.675681  628422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:17:59.694458  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:17:59.996399  628422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:17:59.996558  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-033746 minikube.k8s.io/updated_at=2025_10_13T23_17_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=default-k8s-diff-port-033746 minikube.k8s.io/primary=true
	I1013 23:17:59.996561  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:00.017156  628422 ops.go:34] apiserver oom_adj: -16
	I1013 23:18:00.178218  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:00.678293  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:01.179124  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:01.682566  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:02.178541  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:02.682811  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:03.178378  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:03.678567  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:04.178311  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:04.678299  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:05.179000  628422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:05.563405  628422 kubeadm.go:1113] duration metric: took 5.566984545s to wait for elevateKubeSystemPrivileges
	I1013 23:18:05.563440  628422 kubeadm.go:402] duration metric: took 26.574415981s to StartCluster
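	The repeated `kubectl get sa default` runs above are a readiness poll: the elevateKubeSystemPrivileges wait they account for ends once kube-controller-manager has created the default ServiceAccount. A shell equivalent of that loop, using the binary and kubeconfig paths from this run (the 500ms interval is read off the log timestamps; minikube's actual retry logic lives in its Go code):
	
	  # Poll until the default ServiceAccount exists in the default namespace.
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done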
	I1013 23:18:05.563457  628422 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:05.563520  628422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:05.564155  628422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:05.564373  628422 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:05.564478  628422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:18:05.564717  628422 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:05.564753  628422 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:18:05.564822  628422 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-033746"
	I1013 23:18:05.564839  628422 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-033746"
	I1013 23:18:05.564866  628422 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:18:05.565337  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:18:05.565943  628422 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-033746"
	I1013 23:18:05.565965  628422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-033746"
	I1013 23:18:05.566241  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:18:05.570497  628422 out.go:179] * Verifying Kubernetes components...
	I1013 23:18:05.577848  628422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:05.608177  628422 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-033746"
	I1013 23:18:05.608218  628422 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:18:05.608649  628422 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:18:05.619489  628422 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:18:02.275500  632059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:18:02.275735  632059 start.go:159] libmachine.API.Create for "newest-cni-041709" (driver="docker")
	I1013 23:18:02.275782  632059 client.go:168] LocalClient.Create starting
	I1013 23:18:02.275842  632059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:18:02.275875  632059 main.go:141] libmachine: Decoding PEM data...
	I1013 23:18:02.275890  632059 main.go:141] libmachine: Parsing certificate...
	I1013 23:18:02.275947  632059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:18:02.275964  632059 main.go:141] libmachine: Decoding PEM data...
	I1013 23:18:02.275974  632059 main.go:141] libmachine: Parsing certificate...
	I1013 23:18:02.276378  632059 cli_runner.go:164] Run: docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:18:02.310287  632059 cli_runner.go:211] docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:18:02.310376  632059 network_create.go:284] running [docker network inspect newest-cni-041709] to gather additional debugging logs...
	I1013 23:18:02.310393  632059 cli_runner.go:164] Run: docker network inspect newest-cni-041709
	W1013 23:18:02.346888  632059 cli_runner.go:211] docker network inspect newest-cni-041709 returned with exit code 1
	I1013 23:18:02.346916  632059 network_create.go:287] error running [docker network inspect newest-cni-041709]: docker network inspect newest-cni-041709: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-041709 not found
	I1013 23:18:02.346929  632059 network_create.go:289] output of [docker network inspect newest-cni-041709]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-041709 not found
	
	** /stderr **
	I1013 23:18:02.347020  632059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:18:02.365919  632059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:18:02.366113  632059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:18:02.366360  632059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:18:02.366774  632059 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b3470}
	I1013 23:18:02.366792  632059 network_create.go:124] attempt to create docker network newest-cni-041709 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 23:18:02.366849  632059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-041709 newest-cni-041709
	I1013 23:18:02.430452  632059 network_create.go:108] docker network newest-cni-041709 192.168.76.0/24 created
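	Before this point the run walked the candidate /24 subnets (192.168.49.0, 192.168.58.0, 192.168.67.0) and skipped each because an existing bridge already held it, then settled on 192.168.76.0/24. The creation step can be reproduced by hand with the values from this log:
	
	  docker network create --driver=bridge \
	    --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=newest-cni-041709 \
	    newest-cni-041709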
	I1013 23:18:02.430484  632059 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-041709" container
	I1013 23:18:02.430575  632059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:18:02.447479  632059 cli_runner.go:164] Run: docker volume create newest-cni-041709 --label name.minikube.sigs.k8s.io=newest-cni-041709 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:18:02.466021  632059 oci.go:103] Successfully created a docker volume newest-cni-041709
	I1013 23:18:02.466111  632059 cli_runner.go:164] Run: docker run --rm --name newest-cni-041709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-041709 --entrypoint /usr/bin/test -v newest-cni-041709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:18:03.039436  632059 oci.go:107] Successfully prepared a docker volume newest-cni-041709
	I1013 23:18:03.039496  632059 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:03.039517  632059 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:18:03.039602  632059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-041709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
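	The two `docker run` invocations above implement minikube's volume pre-seeding: a throwaway container first probes the fresh named volume (`/usr/bin/test -d /var/lib`), then a second one bind-mounts the preload tarball read-only and untars it into the volume, so the node container starts with its images already on disk. A condensed sketch, where $TARBALL and $KICBASE are placeholders for the full host path and pinned kicbase digest shown in the log:
	
	  # Seed the profile's named volume with the preloaded images.
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$TARBALL:/preloaded.tar:ro" \
	    -v newest-cni-041709:/extractDir \
	    "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir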
	I1013 23:18:05.622835  628422 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:05.622856  628422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:18:05.622920  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:18:05.655556  628422 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:05.655576  628422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:18:05.655639  628422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:18:05.681420  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:18:05.700795  628422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:18:05.999828  628422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:18:06.135175  628422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:06.219668  628422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:06.278687  628422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:06.901265  628422 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
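	The sed pipeline above edits the coredns ConfigMap in place; the block it splices into the Corefile ahead of the `forward` plugin is what makes host.minikube.internal resolve to the network gateway (192.168.85.1 on this run), and the same pipeline also inserts the `log` plugin before `errors`:
	
	  hosts {
	     192.168.85.1 host.minikube.internal
	     fallthrough
	  }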
	I1013 23:18:06.903873  628422 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:18:07.416628  628422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-033746" context rescaled to 1 replicas
	I1013 23:18:07.434324  628422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.214571445s)
	I1013 23:18:07.434374  628422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.155619417s)
	I1013 23:18:07.496765  628422 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 23:18:07.526572  628422 addons.go:514] duration metric: took 1.961778264s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:18:08.015654  632059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-041709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.976009128s)
	I1013 23:18:08.015689  632059 kic.go:203] duration metric: took 4.976168115s to extract preloaded images to volume ...
	W1013 23:18:08.015849  632059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:18:08.015971  632059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:18:08.078192  632059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-041709 --name newest-cni-041709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-041709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-041709 --network newest-cni-041709 --ip 192.168.76.2 --volume newest-cni-041709:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
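	Each `--publish=127.0.0.1::PORT` flag in the command above lets Docker pick an ephemeral localhost port, which is why later steps keep running `docker container inspect` with a Go template to recover the assignment, e.g. for the node's SSH port:
	
	  docker container inspect newest-cni-041709 \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'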
	I1013 23:18:08.394928  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Running}}
	I1013 23:18:08.422746  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:08.453769  632059 cli_runner.go:164] Run: docker exec newest-cni-041709 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:18:08.506637  632059 oci.go:144] the created container "newest-cni-041709" has a running status.
	I1013 23:18:08.506673  632059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa...
	I1013 23:18:09.362050  632059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:18:09.386962  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:09.410567  632059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:18:09.410593  632059 kic_runner.go:114] Args: [docker exec --privileged newest-cni-041709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:18:09.463377  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:09.483058  632059 machine.go:93] provisionDockerMachine start ...
	I1013 23:18:09.483188  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:09.505695  632059 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:09.506053  632059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I1013 23:18:09.506070  632059 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:18:09.506681  632059 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38680->127.0.0.1:33479: read: connection reset by peer
	W1013 23:18:08.908126  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:11.407947  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:12.654773  632059 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:12.654799  632059 ubuntu.go:182] provisioning hostname "newest-cni-041709"
	I1013 23:18:12.654861  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:12.671191  632059 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:12.671508  632059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I1013 23:18:12.671528  632059 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-041709 && echo "newest-cni-041709" | sudo tee /etc/hostname
	I1013 23:18:12.833559  632059 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:12.833650  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:12.855478  632059 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:12.855802  632059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I1013 23:18:12.855825  632059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-041709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-041709/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-041709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:18:13.015233  632059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:18:13.015264  632059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:18:13.015283  632059 ubuntu.go:190] setting up certificates
	I1013 23:18:13.015296  632059 provision.go:84] configureAuth start
	I1013 23:18:13.015360  632059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:13.036721  632059 provision.go:143] copyHostCerts
	I1013 23:18:13.036800  632059 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:18:13.036815  632059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:18:13.036897  632059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:18:13.036999  632059 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:18:13.037009  632059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:18:13.037037  632059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:18:13.037097  632059 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:18:13.037110  632059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:18:13.037135  632059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:18:13.037189  632059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.newest-cni-041709 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-041709]
	I1013 23:18:14.499537  632059 provision.go:177] copyRemoteCerts
	I1013 23:18:14.499610  632059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:18:14.499652  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:14.521869  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:14.622793  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:18:14.641402  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:18:14.661527  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:18:14.680265  632059 provision.go:87] duration metric: took 1.664944025s to configureAuth
	I1013 23:18:14.680294  632059 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:18:14.680494  632059 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:14.680619  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:14.704719  632059 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:14.705041  632059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33479 <nil> <nil>}
	I1013 23:18:14.705063  632059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:18:14.968139  632059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:18:14.968223  632059 machine.go:96] duration metric: took 5.48511314s to provisionDockerMachine
	I1013 23:18:14.968247  632059 client.go:171] duration metric: took 12.692458165s to LocalClient.Create
	I1013 23:18:14.968300  632059 start.go:167] duration metric: took 12.692566249s to libmachine.API.Create "newest-cni-041709"
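	The container-runtime provisioning that just completed also wrote a one-line drop-in that the crio unit sources, marking the cluster's service CIDR as an insecure registry range; per the SSH output earlier, the file ends up as:
	
	  # /etc/sysconfig/crio.minikube
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '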
	I1013 23:18:14.968328  632059 start.go:293] postStartSetup for "newest-cni-041709" (driver="docker")
	I1013 23:18:14.968368  632059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:18:14.968494  632059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:18:14.968571  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:14.987925  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:15.104260  632059 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:18:15.107996  632059 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:18:15.108026  632059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:18:15.108038  632059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:18:15.108094  632059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:18:15.108181  632059 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:18:15.108291  632059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:18:15.116182  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:15.136072  632059 start.go:296] duration metric: took 167.70166ms for postStartSetup
	I1013 23:18:15.136462  632059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:15.154468  632059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:15.154756  632059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:18:15.154797  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:15.173223  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:15.272029  632059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:18:15.276734  632059 start.go:128] duration metric: took 13.004636169s to createHost
	I1013 23:18:15.276761  632059 start.go:83] releasing machines lock for "newest-cni-041709", held for 13.004763485s
	I1013 23:18:15.276855  632059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:15.293661  632059 ssh_runner.go:195] Run: cat /version.json
	I1013 23:18:15.293701  632059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:18:15.293712  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:15.293771  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:15.318140  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:15.328778  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:15.422667  632059 ssh_runner.go:195] Run: systemctl --version
	I1013 23:18:15.519750  632059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:18:15.557491  632059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:18:15.562504  632059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:18:15.562593  632059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:18:15.593943  632059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 23:18:15.594030  632059 start.go:495] detecting cgroup driver to use...
	I1013 23:18:15.594096  632059 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:18:15.594180  632059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:18:15.612251  632059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:18:15.624834  632059 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:18:15.624951  632059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:18:15.642087  632059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:18:15.662614  632059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:18:15.784230  632059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:18:15.902182  632059 docker.go:234] disabling docker service ...
	I1013 23:18:15.902251  632059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:18:15.925907  632059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:18:15.939018  632059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:18:16.064242  632059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:18:16.185202  632059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:18:16.200295  632059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:18:16.216118  632059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:18:16.216199  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.225915  632059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:18:16.226033  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.235016  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.244388  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.253206  632059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:18:16.261447  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.270559  632059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.284497  632059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:16.294168  632059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:18:16.301695  632059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:18:16.309361  632059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:16.419568  632059 ssh_runner.go:195] Run: sudo systemctl restart crio
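
The run above converges /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits (pin the pause image, force the cgroupfs manager, re-add conmon_cgroup, open unprivileged ports via default_sysctls) and then restarts CRI-O. A minimal Go sketch of the same idempotent rewrite, assuming root and the paths from the log; only the two headline keys are handled here, the remaining edits follow the same pattern:

package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

// Rewrite the CRI-O drop-in in place, then restart the runtime so the
// new pause image and cgroup manager take effect.
func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Replace whole lines, mirroring the sed expressions in the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}
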
	I1013 23:18:16.537294  632059 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:18:16.537377  632059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:18:16.541530  632059 start.go:563] Will wait 60s for crictl version
	I1013 23:18:16.541597  632059 ssh_runner.go:195] Run: which crictl
	I1013 23:18:16.545006  632059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:18:16.570277  632059 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
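
Both "Will wait 60s" steps above are plain polls: check the socket (or run crictl) on a short cadence until it succeeds or the deadline passes. A sketch of that wait; the socket path comes from the log, while the 500ms cadence is an assumption:

package main

import (
	"errors"
	"log"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the
// timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is up")
}
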
	I1013 23:18:16.570369  632059 ssh_runner.go:195] Run: crio --version
	I1013 23:18:16.600572  632059 ssh_runner.go:195] Run: crio --version
	I1013 23:18:16.630243  632059 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:18:16.631287  632059 cli_runner.go:164] Run: docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:18:16.646946  632059 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:18:16.651227  632059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
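
The /etc/hosts update is made idempotent by grepping the old entry out and appending a fresh one, so repeated starts never stack duplicates. A sketch of the same filter-then-append in Go; the hostname and IP come from the log, and writing /etc/hosts requires root:

package main

import (
	"log"
	"os"
	"strings"
)

// Drop any existing host.minikube.internal line, then append a fresh
// tab-separated entry, matching the bash pipeline in the log.
func main() {
	const host = "host.minikube.internal"
	const ip = "192.168.76.1"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
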
	I1013 23:18:16.663166  632059 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 23:18:16.664430  632059 kubeadm.go:883] updating cluster {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:18:16.664552  632059 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:16.664635  632059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:16.699032  632059 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:16.699053  632059 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:18:16.699143  632059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:16.730047  632059 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:16.730071  632059 cache_images.go:85] Images are preloaded, skipping loading
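
The preload check boils down to `sudo crictl images --output json` compared against the expected tag list. A sketch of that comparison; the JSON field names ("images", "repoTags") follow crictl's CRI ListImages output and should be treated as an assumption for other crictl versions, and the sample tags are illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// List images through crictl and report which expected tags are
// already present on the node.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/kube-apiserver:v1.34.1"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}
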
	I1013 23:18:16.730079  632059 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:18:16.730164  632059 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-041709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:18:16.730253  632059 ssh_runner.go:195] Run: crio config
	I1013 23:18:16.802626  632059 cni.go:84] Creating CNI manager for ""
	I1013 23:18:16.802694  632059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:16.802727  632059 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 23:18:16.802785  632059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-041709 NodeName:newest-cni-041709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:18:16.802951  632059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-041709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
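
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A cheap sanity check before handing it to kubeadm is to decode it document by document; a sketch using gopkg.in/yaml.v3, with the path mirroring where the log copies the file:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Decode each YAML document in the stream and print its apiVersion
// and kind; a malformed document fails here instead of inside kubeadm.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
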
	
	I1013 23:18:16.803051  632059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:18:16.811617  632059 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:18:16.811741  632059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:18:16.819538  632059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 23:18:16.832895  632059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:18:16.846351  632059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1013 23:18:16.859714  632059 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:18:16.863682  632059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:18:16.873624  632059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:16.996711  632059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1013 23:18:13.907883  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:16.407825  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:17.016040  632059 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709 for IP: 192.168.76.2
	I1013 23:18:17.016076  632059 certs.go:195] generating shared ca certs ...
	I1013 23:18:17.016093  632059 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:17.016285  632059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:18:17.016351  632059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:18:17.016376  632059 certs.go:257] generating profile certs ...
	I1013 23:18:17.016465  632059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key
	I1013 23:18:17.016482  632059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.crt with IP's: []
	I1013 23:18:18.120699  632059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.crt ...
	I1013 23:18:18.120730  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.crt: {Name:mk29b520ccaa5bda339adef46e27916851f2ff7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:18.120930  632059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key ...
	I1013 23:18:18.120948  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key: {Name:mk18e75377826c767ebaaca0d177652141b329b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:18.121046  632059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96
	I1013 23:18:18.121064  632059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt.01857a96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 23:18:18.427144  632059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt.01857a96 ...
	I1013 23:18:18.427175  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt.01857a96: {Name:mk5eae3928149081cbb78fb49b29ae442913679d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:18.427363  632059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96 ...
	I1013 23:18:18.427377  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96: {Name:mk735d665cc87a44269c5a2d7ff5f8f7cfc94b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:18.427462  632059 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt.01857a96 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt
	I1013 23:18:18.427547  632059 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key
	I1013 23:18:18.427609  632059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key
	I1013 23:18:18.427626  632059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt with IP's: []
	I1013 23:18:19.322297  632059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt ...
	I1013 23:18:19.322330  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt: {Name:mk46d7f7f7a02feab263f860cfeb930fbe7a943f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:19.322542  632059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key ...
	I1013 23:18:19.322558  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key: {Name:mk8996d55971f92502a9693e6b546f037c3eea5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
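
Each profile cert generated above is a CA-signed leaf whose IP SANs are baked in at signing time; the log shows apiserver.crt getting [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A self-contained crypto/x509 sketch that signs a serving cert with that SAN set; minikube signs with its persistent CA, so the throwaway CA generated here is only for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's persistent one.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		// The SAN set the log shows for apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
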
	I1013 23:18:19.322769  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:18:19.322814  632059 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:18:19.322823  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:18:19.322859  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:18:19.322886  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:18:19.322914  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:18:19.322960  632059 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:19.323591  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:18:19.347771  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:18:19.367478  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:18:19.388451  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:18:19.408706  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 23:18:19.428502  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:18:19.451653  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:18:19.469751  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:18:19.491054  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:18:19.510081  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:18:19.534193  632059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:18:19.552408  632059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:18:19.568469  632059 ssh_runner.go:195] Run: openssl version
	I1013 23:18:19.575536  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:18:19.584816  632059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:19.589130  632059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:19.589198  632059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:19.630355  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:18:19.638909  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:18:19.647392  632059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:18:19.651307  632059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:18:19.651417  632059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:18:19.693026  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:18:19.701401  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:18:19.710136  632059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:18:19.715055  632059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:18:19.715154  632059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:18:19.757465  632059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
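
The per-CA pattern above is: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL's lookup-by-hash finds the cert. A sketch of the last two steps; the cert path is illustrative, and writing /etc/ssl/certs requires root:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// Compute the subject hash with openssl and create the "<hash>.0"
// symlink OpenSSL expects in its trust directory.
func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace a stale link if one exists
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pemPath)
}
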
	I1013 23:18:19.766126  632059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:18:19.770006  632059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:18:19.770079  632059 kubeadm.go:400] StartCluster: {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:19.770186  632059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:18:19.770243  632059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:18:19.801707  632059 cri.go:89] found id: ""
	I1013 23:18:19.801839  632059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:18:19.810003  632059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:18:19.818149  632059 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:18:19.818266  632059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:18:19.826265  632059 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:18:19.826284  632059 kubeadm.go:157] found existing configuration files:
	
	I1013 23:18:19.826341  632059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 23:18:19.834657  632059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:18:19.834749  632059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:18:19.842331  632059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 23:18:19.850390  632059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:18:19.850504  632059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:18:19.858450  632059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 23:18:19.866405  632059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:18:19.866488  632059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:18:19.874619  632059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 23:18:19.882489  632059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:18:19.882582  632059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
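
The stale-config sweep above keeps a kubeconfig only if it already points at https://control-plane.minikube.internal:8443; anything else, including a missing file, is removed so `kubeadm init` writes it fresh. The same loop in Go:

package main

import (
	"log"
	"os"
	"strings"
)

// Remove any kubeconfig that does not reference the expected
// control-plane endpoint; kubeadm regenerates the files on init.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint, keep it
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Fatal(err)
		}
		log.Printf("cleared %s", f)
	}
}
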
	I1013 23:18:19.890312  632059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:18:19.933918  632059 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:18:19.934159  632059 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:18:19.975321  632059 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:18:19.975397  632059 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:18:19.975440  632059 kubeadm.go:318] OS: Linux
	I1013 23:18:19.975492  632059 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:18:19.975547  632059 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:18:19.975603  632059 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:18:19.975675  632059 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:18:19.975730  632059 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:18:19.975783  632059 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:18:19.975834  632059 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:18:19.975889  632059 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:18:19.975941  632059 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:18:20.060440  632059 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:18:20.060569  632059 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:18:20.060670  632059 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:18:20.069232  632059 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:18:20.072753  632059 out.go:252]   - Generating certificates and keys ...
	I1013 23:18:20.072944  632059 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:18:20.073086  632059 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:18:20.422729  632059 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:18:20.903498  632059 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:18:21.094156  632059 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:18:21.365796  632059 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1013 23:18:18.908027  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:21.407174  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:22.596660  632059 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:18:22.597020  632059 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-041709] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:18:22.800755  632059 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:18:22.801115  632059 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-041709] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:18:23.147815  632059 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:18:23.467995  632059 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:18:23.891461  632059 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:18:23.892032  632059 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:18:24.467846  632059 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:18:25.052236  632059 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:18:25.484673  632059 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:18:25.821265  632059 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:18:26.467939  632059 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:18:26.468826  632059 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:18:26.471718  632059 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:18:26.474335  632059 out.go:252]   - Booting up control plane ...
	I1013 23:18:26.474439  632059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:18:26.474524  632059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:18:26.475580  632059 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:18:26.492436  632059 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:18:26.492852  632059 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:18:26.501856  632059 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:18:26.504841  632059 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:18:26.504898  632059 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:18:26.638191  632059 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:18:26.638312  632059 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1013 23:18:23.407288  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:25.408400  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:28.139379  632059 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501426754s
	I1013 23:18:28.142862  632059 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:18:28.142956  632059 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 23:18:28.143296  632059 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:18:28.143389  632059 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 23:18:31.020386  632059 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.876921866s
	W1013 23:18:27.908058  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:30.407324  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:32.407440  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:33.040686  632059 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.897723779s
	I1013 23:18:35.147832  632059 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.004623196s
	I1013 23:18:35.170321  632059 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:18:35.184269  632059 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:18:35.199654  632059 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:18:35.199853  632059 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-041709 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:18:35.212311  632059 kubeadm.go:318] [bootstrap-token] Using token: 1z6aib.zi5tlghngrn8aywc
	I1013 23:18:35.215222  632059 out.go:252]   - Configuring RBAC rules ...
	I1013 23:18:35.215359  632059 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:18:35.223425  632059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:18:35.232622  632059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:18:35.237931  632059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:18:35.242571  632059 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:18:35.246754  632059 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:18:35.555857  632059 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:18:36.026734  632059 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:18:36.556523  632059 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:18:36.557733  632059 kubeadm.go:318] 
	I1013 23:18:36.557814  632059 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:18:36.557821  632059 kubeadm.go:318] 
	I1013 23:18:36.557898  632059 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:18:36.557902  632059 kubeadm.go:318] 
	I1013 23:18:36.557928  632059 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:18:36.557987  632059 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:18:36.558037  632059 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:18:36.558042  632059 kubeadm.go:318] 
	I1013 23:18:36.558100  632059 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:18:36.558107  632059 kubeadm.go:318] 
	I1013 23:18:36.558154  632059 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:18:36.558159  632059 kubeadm.go:318] 
	I1013 23:18:36.558211  632059 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:18:36.558286  632059 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:18:36.558354  632059 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:18:36.558359  632059 kubeadm.go:318] 
	I1013 23:18:36.558458  632059 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:18:36.558536  632059 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:18:36.558540  632059 kubeadm.go:318] 
	I1013 23:18:36.558624  632059 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1z6aib.zi5tlghngrn8aywc \
	I1013 23:18:36.558726  632059 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:18:36.558746  632059 kubeadm.go:318] 	--control-plane 
	I1013 23:18:36.558751  632059 kubeadm.go:318] 
	I1013 23:18:36.558834  632059 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:18:36.558839  632059 kubeadm.go:318] 
	I1013 23:18:36.558920  632059 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1z6aib.zi5tlghngrn8aywc \
	I1013 23:18:36.559020  632059 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:18:36.562552  632059 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 23:18:36.562811  632059 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:18:36.562930  632059 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 23:18:36.562956  632059 cni.go:84] Creating CNI manager for ""
	I1013 23:18:36.562964  632059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:36.568107  632059 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 23:18:36.571045  632059 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:18:36.575379  632059 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:18:36.575403  632059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:18:36.592430  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:18:36.893127  632059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:18:36.893227  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:36.893291  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-041709 minikube.k8s.io/updated_at=2025_10_13T23_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=newest-cni-041709 minikube.k8s.io/primary=true
	W1013 23:18:34.407767  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:36.908500  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:37.133120  632059 ops.go:34] apiserver oom_adj: -16
	I1013 23:18:37.133231  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:37.634069  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:38.133781  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:38.633688  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:39.133281  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:39.634182  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:40.133842  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:40.633609  632059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:18:40.756260  632059 kubeadm.go:1113] duration metric: took 3.863091335s to wait for elevateKubeSystemPrivileges
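
The elevateKubeSystemPrivileges wait above retries `kubectl get sa default` every 500ms because kube-controller-manager creates the default service account asynchronously after init; in this run it took about 3.9s. A sketch of that wait, with the binary and kubeconfig paths taken from the log and the 120-attempt cap an assumption:

package main

import (
	"log"
	"os/exec"
	"time"
)

// Poll until the default service account exists, mirroring the
// repeated `kubectl get sa default` calls in the log.
func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.34.1/kubectl"
	for i := 0; i < 120; i++ { // ~60s worst case
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("default service account never appeared")
}
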
	I1013 23:18:40.756292  632059 kubeadm.go:402] duration metric: took 20.986235996s to StartCluster
	I1013 23:18:40.756309  632059 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:40.756396  632059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:40.757423  632059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:40.757663  632059 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:40.757746  632059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:18:40.758029  632059 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:40.758141  632059 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:18:40.758204  632059 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-041709"
	I1013 23:18:40.758220  632059 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-041709"
	I1013 23:18:40.758242  632059 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:40.758765  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:40.759119  632059 addons.go:69] Setting default-storageclass=true in profile "newest-cni-041709"
	I1013 23:18:40.759149  632059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-041709"
	I1013 23:18:40.759457  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:40.761121  632059 out.go:179] * Verifying Kubernetes components...
	I1013 23:18:40.765160  632059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:40.807413  632059 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:18:40.808603  632059 addons.go:238] Setting addon default-storageclass=true in "newest-cni-041709"
	I1013 23:18:40.808653  632059 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:40.809233  632059 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:40.814766  632059 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:40.814835  632059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:18:40.814947  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:40.857664  632059 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:40.857687  632059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:18:40.857766  632059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:40.862529  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:40.886927  632059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:41.115285  632059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:18:41.115445  632059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:41.193173  632059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:41.264561  632059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:41.760025  632059 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:41.760099  632059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:41.760241  632059 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
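
The ConfigMap edit above splices a hosts{} block in front of CoreDNS's forward plugin so host.minikube.internal resolves in-cluster, then replaces the ConfigMap. A sketch doing the same string surgery via kubectl instead of sed; it assumes kubectl on PATH and the 8-space Corefile indentation the log's sed pipeline also relies on:

package main

import (
	"log"
	"os/exec"
	"strings"
)

// Fetch the coredns ConfigMap, insert a hosts{} block ahead of the
// forward plugin, and replace the ConfigMap with the patched YAML.
func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	hosts := "        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }\n"
	patched := strings.Replace(string(out),
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	cmd := exec.Command("kubectl", "replace", "-f", "-")
	cmd.Stdin = strings.NewReader(patched)
	if b, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("replace coredns: %v: %s", err, b)
	}
}
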
	I1013 23:18:42.145120  632059 api_server.go:72] duration metric: took 1.387426044s to wait for apiserver process to appear ...
	I1013 23:18:42.145215  632059 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:18:42.145257  632059 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:18:42.156343  632059 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:18:42.157759  632059 api_server.go:141] control plane version: v1.34.1
	I1013 23:18:42.157844  632059 api_server.go:131] duration metric: took 12.604171ms to wait for apiserver health ...
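
The health wait boils down to a GET on https://192.168.76.2:8443/healthz expecting the literal body "ok", as logged above. A sketch of one probe; skipping TLS verification is a shortcut for this sketch only, not what minikube does (it authenticates with its client certificates):

package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

// Probe the apiserver healthz endpoint once and print status and body.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("status=%d body=%q", resp.StatusCode, body)
}
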
	I1013 23:18:42.157870  632059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:18:42.173134  632059 system_pods.go:59] 8 kube-system pods found
	I1013 23:18:42.173234  632059 system_pods.go:61] "coredns-66bc5c9577-xj6dp" [f8aa8176-0559-438a-bb73-df95a9b5b826] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:18:42.173267  632059 system_pods.go:61] "etcd-newest-cni-041709" [2e1039ec-5511-4bc4-bb4e-331058716785] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:18:42.173311  632059 system_pods.go:61] "kindnet-x8mhj" [414b54bb-0026-41ac-96be-8dee1342b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 23:18:42.173344  632059 system_pods.go:61] "kube-apiserver-newest-cni-041709" [e9b71f4d-dcbb-41d1-a857-431101cc96c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:18:42.173386  632059 system_pods.go:61] "kube-controller-manager-newest-cni-041709" [1ccd495b-3870-48ce-8bc7-bc4fb413007f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:18:42.173424  632059 system_pods.go:61] "kube-proxy-9th9t" [36d1d7c2-c48c-4aeb-a4bc-86598239d36d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 23:18:42.173472  632059 system_pods.go:61] "kube-scheduler-newest-cni-041709" [d633d6be-b266-423e-b273-f756f05c08ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:18:42.173504  632059 system_pods.go:61] "storage-provisioner" [641ababe-c476-4464-889e-314716244888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:18:42.173527  632059 system_pods.go:74] duration metric: took 15.635512ms to wait for pod list to return data ...
	I1013 23:18:42.173569  632059 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:18:42.190069  632059 default_sa.go:45] found service account: "default"
	I1013 23:18:42.190104  632059 default_sa.go:55] duration metric: took 16.514764ms for default service account to be created ...
	I1013 23:18:42.190120  632059 kubeadm.go:586] duration metric: took 1.43243039s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:18:42.190143  632059 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:18:42.190641  632059 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 23:18:42.194485  632059 addons.go:514] duration metric: took 1.436308901s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:18:42.201394  632059 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:18:42.201432  632059 node_conditions.go:123] node cpu capacity is 2
	I1013 23:18:42.201447  632059 node_conditions.go:105] duration metric: took 11.297876ms to run NodePressure ...
	I1013 23:18:42.201461  632059 start.go:241] waiting for startup goroutines ...
	I1013 23:18:42.276308  632059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-041709" context rescaled to 1 replicas
	I1013 23:18:42.276342  632059 start.go:246] waiting for cluster config update ...
	I1013 23:18:42.276355  632059 start.go:255] writing updated cluster config ...
	I1013 23:18:42.276728  632059 ssh_runner.go:195] Run: rm -f paused
	I1013 23:18:42.385585  632059 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:18:42.388547  632059 out.go:179] * Done! kubectl is now configured to use "newest-cni-041709" cluster and "default" namespace by default
	W1013 23:18:38.909034  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:41.415367  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.380286871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.387958595Z" level=info msg="Running pod sandbox: kube-system/kindnet-x8mhj/POD" id=61ca6fc5-f1ab-4779-8b4f-328a3bf09859 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.388033851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.392971016Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=61ca6fc5-f1ab-4779-8b4f-328a3bf09859 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.39791646Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ccc086a7-dd7d-449d-9ab8-cb5e843dfba1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.42578079Z" level=info msg="Ran pod sandbox 4a265a1838ccf556a66a8c09d9cf95742e8fe97c58e34ca65564064592168494 with infra container: kube-system/kube-proxy-9th9t/POD" id=ccc086a7-dd7d-449d-9ab8-cb5e843dfba1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.426662266Z" level=info msg="Ran pod sandbox fea96696eda46c79a7bf4dbeccf7f637c93e5a52903e97f2e3eafdbe00585ad6 with infra container: kube-system/kindnet-x8mhj/POD" id=61ca6fc5-f1ab-4779-8b4f-328a3bf09859 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.429023713Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=910051b3-fe97-4701-91b5-98d30ef7ad2d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.429776296Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=033f25c5-986f-4f82-a214-2bc10bc7ec01 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.43141116Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=45012e86-4e3a-4f0a-ac5f-5fdd590e0263 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.433596462Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8157a8c2-f649-408d-9a1c-cc43e407fd5a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.445718633Z" level=info msg="Creating container: kube-system/kindnet-x8mhj/kindnet-cni" id=0031a36e-07f0-46c5-adda-95412a0e081b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.446004519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.451106341Z" level=info msg="Creating container: kube-system/kube-proxy-9th9t/kube-proxy" id=13fa8697-024d-4be5-977c-52224ef4cdc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.452333115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.460802927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.461367716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.463510738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.46401504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.509330231Z" level=info msg="Created container bbf88665c031900ba707ca3696072c00a7fd04e37dd34e0f43d75fcdccf3eab2: kube-system/kindnet-x8mhj/kindnet-cni" id=0031a36e-07f0-46c5-adda-95412a0e081b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.52825645Z" level=info msg="Created container 9642e2db806654ae4c03cd55db7da625e4c89a1d4e83ce9c832b98aecccd6b2f: kube-system/kube-proxy-9th9t/kube-proxy" id=13fa8697-024d-4be5-977c-52224ef4cdc3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.532058393Z" level=info msg="Starting container: bbf88665c031900ba707ca3696072c00a7fd04e37dd34e0f43d75fcdccf3eab2" id=d26fecf9-ec6b-45fd-bbb0-7745b7033133 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.532324874Z" level=info msg="Starting container: 9642e2db806654ae4c03cd55db7da625e4c89a1d4e83ce9c832b98aecccd6b2f" id=814a8548-ffef-44cd-9a7a-7822e0080d83 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.539758243Z" level=info msg="Started container" PID=1482 containerID=bbf88665c031900ba707ca3696072c00a7fd04e37dd34e0f43d75fcdccf3eab2 description=kube-system/kindnet-x8mhj/kindnet-cni id=d26fecf9-ec6b-45fd-bbb0-7745b7033133 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fea96696eda46c79a7bf4dbeccf7f637c93e5a52903e97f2e3eafdbe00585ad6
	Oct 13 23:18:41 newest-cni-041709 crio[839]: time="2025-10-13T23:18:41.540768455Z" level=info msg="Started container" PID=1483 containerID=9642e2db806654ae4c03cd55db7da625e4c89a1d4e83ce9c832b98aecccd6b2f description=kube-system/kube-proxy-9th9t/kube-proxy id=814a8548-ffef-44cd-9a7a-7822e0080d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a265a1838ccf556a66a8c09d9cf95742e8fe97c58e34ca65564064592168494
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bbf88665c0319       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   fea96696eda46       kindnet-x8mhj                               kube-system
	9642e2db80665       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   4a265a1838ccf       kube-proxy-9th9t                            kube-system
	a238c15e2389b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   92237a7bdda7b       kube-apiserver-newest-cni-041709            kube-system
	694d412d6c108       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   d1a9188e7a865       kube-controller-manager-newest-cni-041709   kube-system
	7cec75c0657f8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   1cdfb22237583       kube-scheduler-newest-cni-041709            kube-system
	e67a5ff027a84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   74a1934fe5b04       etcd-newest-cni-041709                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-041709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-041709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-041709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:18:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-041709
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:18:36 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:18:36 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:18:36 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 23:18:36 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-041709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                0d9c1be1-5d17-406d-9cb7-8ce49d27cba4
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-041709                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-x8mhj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-041709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-041709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-9th9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-041709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-041709 event: Registered Node newest-cni-041709 in Controller
	
	
	==> dmesg <==
	[Oct13 22:56] overlayfs: idmapped layers are currently not supported
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e67a5ff027a841d0133a6d4deee319183a7f47a0f86e4b47485424ef09801695] <==
	{"level":"warn","ts":"2025-10-13T23:18:31.743523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.767032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.777297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.794078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.809607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.827476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.841336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.859995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.872713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.907544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.923187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.938561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.954525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.969196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:31.986996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.011736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.020117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.035608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.055732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.072626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.095092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.117530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.132938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.150847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:32.215043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:18:43 up  3:00,  0 user,  load average: 3.28, 3.46, 2.81
	Linux newest-cni-041709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bbf88665c031900ba707ca3696072c00a7fd04e37dd34e0f43d75fcdccf3eab2] <==
	I1013 23:18:41.622501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:18:41.622764       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:18:41.622880       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:18:41.622892       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:18:41.622901       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:18:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:18:41.904249       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:18:41.908467       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:18:41.908571       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:18:41.908767       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [a238c15e2389bf63d08928ae34e7e3481d2012014dcc74020f9475c4d14d1918] <==
	I1013 23:18:33.076175       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 23:18:33.076240       1 policy_source.go:240] refreshing policies
	I1013 23:18:33.078106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:18:33.104828       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:18:33.159764       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:18:33.161671       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 23:18:33.169645       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:18:33.170361       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:18:33.813354       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 23:18:33.818727       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 23:18:33.818755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:18:34.557613       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:18:34.605259       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:18:34.689542       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 23:18:34.704504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 23:18:34.706066       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:18:34.713106       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:18:34.988461       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:18:35.997270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:18:36.025461       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 23:18:36.045541       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 23:18:40.860491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:18:40.934429       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 23:18:41.049741       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:18:41.063341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [694d412d6c1089686d07c9d5218ce49406590c2bad842aab273d0b4c09877a87] <==
	I1013 23:18:40.041352       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:18:40.041480       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:18:40.041535       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 23:18:40.041784       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:18:40.041908       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:18:40.042324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 23:18:40.043625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 23:18:40.043702       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:18:40.043844       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:18:40.046980       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:18:40.047002       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:18:40.047015       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:18:40.047026       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:18:40.047038       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:18:40.047047       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:18:40.047061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 23:18:40.049178       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 23:18:40.049248       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 23:18:40.049279       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 23:18:40.049295       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 23:18:40.049301       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 23:18:40.054577       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:18:40.054662       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:18:40.062623       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-041709" podCIDRs=["10.42.0.0/24"]
	I1013 23:18:40.069857       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [9642e2db806654ae4c03cd55db7da625e4c89a1d4e83ce9c832b98aecccd6b2f] <==
	I1013 23:18:41.684151       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:18:41.802505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:18:41.909205       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:18:41.909363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:18:41.909523       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:18:41.981457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:18:41.981576       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:18:41.991385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:18:41.991716       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:18:41.991733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:18:42.005888       1 config.go:200] "Starting service config controller"
	I1013 23:18:42.006383       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:18:42.007223       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:18:42.007245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:18:42.010192       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:18:42.010225       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:18:42.011035       1 config.go:309] "Starting node config controller"
	I1013 23:18:42.011047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:18:42.011054       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:18:42.107267       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:18:42.107352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 23:18:42.111116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7cec75c0657f8aeeafb28079be540c90d0952d1f7d558d8dfdff450ee64385e6] <==
	E1013 23:18:33.054276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 23:18:33.054334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 23:18:33.054480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:18:33.054507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:18:33.054550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 23:18:33.054606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 23:18:33.054645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 23:18:33.054766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:18:33.874304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:18:33.879463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 23:18:33.921499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 23:18:33.985231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:18:33.995824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 23:18:34.018069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 23:18:34.023281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 23:18:34.068599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:18:34.071851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 23:18:34.101796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:18:34.119256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:18:34.170933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:18:34.196767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:18:34.208628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 23:18:34.224626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:18:34.287449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1013 23:18:36.590913       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:18:36 newest-cni-041709 kubelet[1301]: I1013 23:18:36.924368    1301 apiserver.go:52] "Watching apiserver"
	Oct 13 23:18:36 newest-cni-041709 kubelet[1301]: I1013 23:18:36.957205    1301 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.086566    1301 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.086816    1301 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.108073    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-041709" podStartSLOduration=1.108038604 podStartE2EDuration="1.108038604s" podCreationTimestamp="2025-10-13 23:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:37.107999556 +0000 UTC m=+1.276057106" watchObservedRunningTime="2025-10-13 23:18:37.108038604 +0000 UTC m=+1.276096137"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: E1013 23:18:37.153313    1301 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-041709\" already exists" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: E1013 23:18:37.153782    1301 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-041709\" already exists" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.164222    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-041709" podStartSLOduration=1.164194317 podStartE2EDuration="1.164194317s" podCreationTimestamp="2025-10-13 23:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:37.163844572 +0000 UTC m=+1.331902113" watchObservedRunningTime="2025-10-13 23:18:37.164194317 +0000 UTC m=+1.332251858"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.184130    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-041709" podStartSLOduration=1.184100513 podStartE2EDuration="1.184100513s" podCreationTimestamp="2025-10-13 23:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:37.180930115 +0000 UTC m=+1.348987656" watchObservedRunningTime="2025-10-13 23:18:37.184100513 +0000 UTC m=+1.352158054"
	Oct 13 23:18:37 newest-cni-041709 kubelet[1301]: I1013 23:18:37.201809    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-041709" podStartSLOduration=1.20179136 podStartE2EDuration="1.20179136s" podCreationTimestamp="2025-10-13 23:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:37.201703912 +0000 UTC m=+1.369761453" watchObservedRunningTime="2025-10-13 23:18:37.20179136 +0000 UTC m=+1.369848909"
	Oct 13 23:18:40 newest-cni-041709 kubelet[1301]: I1013 23:18:40.152220    1301 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 23:18:40 newest-cni-041709 kubelet[1301]: I1013 23:18:40.153206    1301 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201569    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-xtables-lock\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201664    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-lib-modules\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201707    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-xtables-lock\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201725    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnl7f\" (UniqueName: \"kubernetes.io/projected/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-kube-api-access-xnl7f\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201768    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfk86\" (UniqueName: \"kubernetes.io/projected/414b54bb-0026-41ac-96be-8dee1342b4eb-kube-api-access-tfk86\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201788    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-kube-proxy\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201804    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-lib-modules\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.201958    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-cni-cfg\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: I1013 23:18:41.330848    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: W1013 23:18:41.425078    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/crio-fea96696eda46c79a7bf4dbeccf7f637c93e5a52903e97f2e3eafdbe00585ad6 WatchSource:0}: Error finding container fea96696eda46c79a7bf4dbeccf7f637c93e5a52903e97f2e3eafdbe00585ad6: Status 404 returned error can't find the container with id fea96696eda46c79a7bf4dbeccf7f637c93e5a52903e97f2e3eafdbe00585ad6
	Oct 13 23:18:41 newest-cni-041709 kubelet[1301]: W1013 23:18:41.425526    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/crio-4a265a1838ccf556a66a8c09d9cf95742e8fe97c58e34ca65564064592168494 WatchSource:0}: Error finding container 4a265a1838ccf556a66a8c09d9cf95742e8fe97c58e34ca65564064592168494: Status 404 returned error can't find the container with id 4a265a1838ccf556a66a8c09d9cf95742e8fe97c58e34ca65564064592168494
	Oct 13 23:18:42 newest-cni-041709 kubelet[1301]: I1013 23:18:42.214887    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9th9t" podStartSLOduration=2.214867206 podStartE2EDuration="2.214867206s" podCreationTimestamp="2025-10-13 23:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:42.180718225 +0000 UTC m=+6.348775766" watchObservedRunningTime="2025-10-13 23:18:42.214867206 +0000 UTC m=+6.382924739"
	Oct 13 23:18:42 newest-cni-041709 kubelet[1301]: I1013 23:18:42.271550    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x8mhj" podStartSLOduration=2.271529551 podStartE2EDuration="2.271529551s" podCreationTimestamp="2025-10-13 23:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:42.216350089 +0000 UTC m=+6.384407655" watchObservedRunningTime="2025-10-13 23:18:42.271529551 +0000 UTC m=+6.439587084"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-041709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xj6dp storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner: exit status 1 (90.72935ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xj6dp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.42s)
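Note on the failure above: the describe-nodes output shows newest-cni-041709 still carrying the node.kubernetes.io/not-ready:NoSchedule taint because no CNI configuration file existed in /etc/cni/net.d/ yet, which is also why storage-provisioner was Pending/Unschedulable and coredns-66bc5c9577-xj6dp never started. A hedged way to confirm the missing CNI config by hand (the ssh invocation below is an illustration, not part of the test; the path comes from the KubeletNotReady message):

	out/minikube-linux-arm64 -p newest-cni-041709 ssh -- ls /etc/cni/net.d/
	# expected to be empty (or to fail) until kindnet writes its CNI config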

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (390.539735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:18:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
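The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check, which shells out to sudo runc list -f json inside the node container; the open /run/runc: no such file or directory error in the stderr above means runc's state directory was missing when the check ran. A minimal reproduction sketch (the ssh wrapper is an assumption for illustration; the runc command and path are taken verbatim from the stderr):

	out/minikube-linux-arm64 -p default-k8s-diff-port-033746 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p default-k8s-diff-port-033746 ssh -- ls -d /run/runc
	# both are expected to reproduce the "no such file or directory" failure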
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-033746 describe deploy/metrics-server -n kube-system: exit status 1 (132.646064ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-033746 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
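The assertion at start_stop_delete_test.go:219 verifies that enabling the addon rewrote the metrics-server image to point at the fake registry; since the Deployment was never created here, there was nothing to inspect. A hedged command-line equivalent of that image check (the jsonpath expression is an assumption; the context name and expected image string come from the log):

	kubectl --context default-k8s-diff-port-033746 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4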
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-033746
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-033746:

-- stdout --
	[
	    {
	        "Id": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	        "Created": "2025-10-13T23:17:28.705422027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 628862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:17:28.790967463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hostname",
	        "HostsPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hosts",
	        "LogPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090-json.log",
	        "Name": "/default-k8s-diff-port-033746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-033746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-033746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	                "LowerDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-033746",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-033746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-033746",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fa532ac86e539e05c21cbd83b6a2ae8ac0a079db4cc7de9bb80715de479f753",
	            "SandboxKey": "/var/run/docker/netns/0fa532ac86e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-033746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:e3:63:39:f8:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8549f6a07be41a945dcb145bb71d1b75a39e75ddc68f75d19380e8800e056e42",
	                    "EndpointID": "9f4bb2d87403da6f3380cfb9fa6847c5614c889bfb2ee80c72b5e1aece66277e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-033746",
	                        "278dbdd59e84"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
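A single host-port mapping can be pulled out of this inspect output with the same Go template the harness itself runs later in these logs; for this run, the apiserver port 8444/tcp maps to host port 33477:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-033746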
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25: (1.922634985s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:14 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-985461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │                     │
	│ stop    │ -p no-preload-985461 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:15 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p newest-cni-041709 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:18:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:18:46.659625  635465 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:18:46.659759  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.659771  635465 out.go:374] Setting ErrFile to fd 2...
	I1013 23:18:46.659778  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.660033  635465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:18:46.660394  635465 out.go:368] Setting JSON to false
	I1013 23:18:46.661326  635465 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10863,"bootTime":1760386664,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:18:46.661395  635465 start.go:141] virtualization:  
	I1013 23:18:46.665006  635465 out.go:179] * [newest-cni-041709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:18:46.668932  635465 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:18:46.668996  635465 notify.go:220] Checking for updates...
	I1013 23:18:46.674873  635465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:18:46.677956  635465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:46.680967  635465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:18:46.684029  635465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:18:46.686838  635465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:18:46.690305  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:46.690950  635465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:18:46.730001  635465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:18:46.730151  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.812994  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.793285508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.813104  635465 docker.go:318] overlay module found
	I1013 23:18:46.817039  635465 out.go:179] * Using the docker driver based on existing profile
	I1013 23:18:46.820013  635465 start.go:305] selected driver: docker
	I1013 23:18:46.820036  635465 start.go:925] validating driver "docker" against &{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.820133  635465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:18:46.820838  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.888064  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.878929832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.888406  635465 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:18:46.888461  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:46.888520  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:46.888562  635465 start.go:349] cluster config:
	{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.892201  635465 out.go:179] * Starting "newest-cni-041709" primary control-plane node in "newest-cni-041709" cluster
	I1013 23:18:46.895353  635465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:18:46.898524  635465 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:18:46.901626  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:46.901707  635465 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:18:46.901713  635465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:18:46.901723  635465 cache.go:58] Caching tarball of preloaded images
	I1013 23:18:46.901838  635465 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:18:46.901850  635465 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:18:46.901994  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:46.932671  635465 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:18:46.932697  635465 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:18:46.932710  635465 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:18:46.932738  635465 start.go:360] acquireMachinesLock for newest-cni-041709: {Name:mk550fb39e8064c08d6ccaf342c21fc53a30808d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:18:46.932799  635465 start.go:364] duration metric: took 35.913µs to acquireMachinesLock for "newest-cni-041709"
	I1013 23:18:46.932823  635465 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:18:46.932842  635465 fix.go:54] fixHost starting: 
	I1013 23:18:46.933108  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:46.956430  635465 fix.go:112] recreateIfNeeded on newest-cni-041709: state=Stopped err=<nil>
	W1013 23:18:46.956463  635465 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:18:43.907614  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:46.408020  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:46.908290  628422 node_ready.go:49] node "default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:46.908317  628422 node_ready.go:38] duration metric: took 40.004370462s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:18:46.908330  628422 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:46.908478  628422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:46.923250  628422 api_server.go:72] duration metric: took 41.358831182s to wait for apiserver process to appear ...
	I1013 23:18:46.923281  628422 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:18:46.923305  628422 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1013 23:18:46.935604  628422 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1013 23:18:46.936939  628422 api_server.go:141] control plane version: v1.34.1
	I1013 23:18:46.936961  628422 api_server.go:131] duration metric: took 13.673442ms to wait for apiserver health ...
	I1013 23:18:46.936970  628422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:18:46.941226  628422 system_pods.go:59] 8 kube-system pods found
	I1013 23:18:46.941259  628422 system_pods.go:61] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.941266  628422 system_pods.go:61] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.941273  628422 system_pods.go:61] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.941278  628422 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.941283  628422 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.941287  628422 system_pods.go:61] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.941292  628422 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.941297  628422 system_pods.go:61] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.941305  628422 system_pods.go:74] duration metric: took 4.329029ms to wait for pod list to return data ...
	I1013 23:18:46.941312  628422 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:18:46.944488  628422 default_sa.go:45] found service account: "default"
	I1013 23:18:46.944516  628422 default_sa.go:55] duration metric: took 3.197368ms for default service account to be created ...
	I1013 23:18:46.944526  628422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:18:46.950031  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:46.950073  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.950081  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.950087  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.950092  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.950097  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.950101  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.950106  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.950112  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.950139  628422 retry.go:31] will retry after 198.826906ms: missing components: kube-dns
	I1013 23:18:47.153958  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.153990  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.153998  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.154004  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.154008  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.154012  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.154017  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.154020  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.154026  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.154041  628422 retry.go:31] will retry after 287.091453ms: missing components: kube-dns
	I1013 23:18:47.445492  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.445522  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.445530  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.445537  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.445542  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.445546  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.445550  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.445554  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.445560  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.445574  628422 retry.go:31] will retry after 372.489262ms: missing components: kube-dns
	I1013 23:18:47.822900  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.822989  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running
	I1013 23:18:47.823015  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.823037  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.823064  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.823169  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.823209  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.823238  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.823260  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:18:47.823291  628422 system_pods.go:126] duration metric: took 878.758193ms to wait for k8s-apps to be running ...
	I1013 23:18:47.823314  628422 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:18:47.823387  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:18:47.840918  628422 system_svc.go:56] duration metric: took 17.596072ms WaitForService to wait for kubelet
	I1013 23:18:47.840951  628422 kubeadm.go:586] duration metric: took 42.276544463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:18:47.840971  628422 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:18:47.845175  628422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:18:47.845214  628422 node_conditions.go:123] node cpu capacity is 2
	I1013 23:18:47.845227  628422 node_conditions.go:105] duration metric: took 4.251164ms to run NodePressure ...
	I1013 23:18:47.845240  628422 start.go:241] waiting for startup goroutines ...
	I1013 23:18:47.845248  628422 start.go:246] waiting for cluster config update ...
	I1013 23:18:47.845259  628422 start.go:255] writing updated cluster config ...
	I1013 23:18:47.845569  628422 ssh_runner.go:195] Run: rm -f paused
	I1013 23:18:47.851978  628422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:47.855576  628422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.860797  628422 pod_ready.go:94] pod "coredns-66bc5c9577-qf4lq" is "Ready"
	I1013 23:18:47.860825  628422 pod_ready.go:86] duration metric: took 5.221377ms for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.863794  628422 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.869540  628422 pod_ready.go:94] pod "etcd-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.869581  628422 pod_ready.go:86] duration metric: took 5.758982ms for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.872429  628422 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.877409  628422 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.877436  628422 pod_ready.go:86] duration metric: took 4.943697ms for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.880160  628422 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.256569  628422 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:48.256600  628422 pod_ready.go:86] duration metric: took 376.414834ms for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.455997  628422 pod_ready.go:83] waiting for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.857167  628422 pod_ready.go:94] pod "kube-proxy-mxnv7" is "Ready"
	I1013 23:18:48.857199  628422 pod_ready.go:86] duration metric: took 401.173799ms for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.056104  628422 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456155  628422 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:49.456184  628422 pod_ready.go:86] duration metric: took 400.055996ms for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456198  628422 pod_ready.go:40] duration metric: took 1.604180795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:49.517347  628422 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:18:49.520660  628422 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-033746" cluster and "default" namespace by default
	I1013 23:18:46.959948  635465 out.go:252] * Restarting existing docker container for "newest-cni-041709" ...
	I1013 23:18:46.960041  635465 cli_runner.go:164] Run: docker start newest-cni-041709
	I1013 23:18:47.272614  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:47.301351  635465 kic.go:430] container "newest-cni-041709" state is running.
	I1013 23:18:47.302387  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:47.336818  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:47.337080  635465 machine.go:93] provisionDockerMachine start ...
	I1013 23:18:47.337156  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:47.361341  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:47.361673  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:47.361689  635465 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:18:47.362232  635465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54710->127.0.0.1:33484: read: connection reset by peer
	I1013 23:18:50.506943  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.506971  635465 ubuntu.go:182] provisioning hostname "newest-cni-041709"
	I1013 23:18:50.507038  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.529919  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.530245  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.530263  635465 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-041709 && echo "newest-cni-041709" | sudo tee /etc/hostname
	I1013 23:18:50.684921  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.685001  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.704082  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.704403  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.704447  635465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-041709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-041709/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-041709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:18:50.859361  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:18:50.859386  635465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:18:50.859412  635465 ubuntu.go:190] setting up certificates
	I1013 23:18:50.859421  635465 provision.go:84] configureAuth start
	I1013 23:18:50.859479  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:50.876142  635465 provision.go:143] copyHostCerts
	I1013 23:18:50.876213  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:18:50.876241  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:18:50.876320  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:18:50.876465  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:18:50.876477  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:18:50.876508  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:18:50.876577  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:18:50.876587  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:18:50.876613  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:18:50.876675  635465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.newest-cni-041709 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-041709]
	I1013 23:18:51.531424  635465 provision.go:177] copyRemoteCerts
	I1013 23:18:51.531540  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:18:51.531626  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.550654  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:51.658836  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:18:51.677867  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:18:51.695565  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:18:51.718317  635465 provision.go:87] duration metric: took 858.871562ms to configureAuth
	I1013 23:18:51.718407  635465 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:18:51.718639  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:51.718821  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.739263  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:51.739570  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:51.739597  635465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:18:52.103542  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:18:52.103629  635465 machine.go:96] duration metric: took 4.766529819s to provisionDockerMachine
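	The provisioning step above writes a sysconfig drop-in so that CRI-O treats the in-cluster service CIDR as an insecure registry. A minimal standalone sketch of the same idea, assuming (as in minikube's kicbase image) that crio.service sources /etc/sysconfig/crio.minikube as an environment file:
	
	# Sketch: pass --insecure-registry to CRI-O via a sysconfig drop-in.
	# Assumes crio.service reads EnvironmentFile=/etc/sysconfig/crio.minikube.
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio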
	I1013 23:18:52.103655  635465 start.go:293] postStartSetup for "newest-cni-041709" (driver="docker")
	I1013 23:18:52.103680  635465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:18:52.103769  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:18:52.103834  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.134085  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.243909  635465 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:18:52.248277  635465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:18:52.248316  635465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:18:52.248328  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:18:52.248403  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:18:52.248560  635465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:18:52.248714  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:18:52.265882  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:52.295454  635465 start.go:296] duration metric: took 191.769349ms for postStartSetup
	I1013 23:18:52.295552  635465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:18:52.295635  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.314699  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.416219  635465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:18:52.421219  635465 fix.go:56] duration metric: took 5.488375753s for fixHost
	I1013 23:18:52.421253  635465 start.go:83] releasing machines lock for "newest-cni-041709", held for 5.488442081s
	I1013 23:18:52.421386  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:52.447747  635465 ssh_runner.go:195] Run: cat /version.json
	I1013 23:18:52.447805  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.447830  635465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:18:52.447892  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.472897  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.484151  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.583128  635465 ssh_runner.go:195] Run: systemctl --version
	I1013 23:18:52.675301  635465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:18:52.730783  635465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:18:52.736216  635465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:18:52.736295  635465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:18:52.744350  635465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:18:52.744460  635465 start.go:495] detecting cgroup driver to use...
	I1013 23:18:52.744522  635465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:18:52.744593  635465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:18:52.760915  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:18:52.775406  635465 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:18:52.775473  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:18:52.791809  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:18:52.805702  635465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:18:52.932808  635465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:18:53.078511  635465 docker.go:234] disabling docker service ...
	I1013 23:18:53.078575  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:18:53.096051  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:18:53.111550  635465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:18:53.239198  635465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:18:53.365077  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:18:53.379354  635465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:18:53.393225  635465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:18:53.393321  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.402665  635465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:18:53.402754  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.412000  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.421032  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.430178  635465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:18:53.446415  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.461458  635465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.471380  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.480834  635465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:18:53.488886  635465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:18:53.496978  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:53.632607  635465 ssh_runner.go:195] Run: sudo systemctl restart crio
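	The chain of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A quick hedged check of what the file should contain afterwards (key names taken from the commands above; the exact surrounding TOML may differ):
	
	# Sketch: confirm the settings the sed edits above are expected to have written.
	grep -E 'cgroup_manager|conmon_cgroup|pause_image|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]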
	I1013 23:18:53.798233  635465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:18:53.798351  635465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:18:53.802403  635465 start.go:563] Will wait 60s for crictl version
	I1013 23:18:53.802545  635465 ssh_runner.go:195] Run: which crictl
	I1013 23:18:53.806255  635465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:18:53.831293  635465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:18:53.831455  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.861533  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.892208  635465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:18:53.894956  635465 cli_runner.go:164] Run: docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:18:53.910736  635465 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:18:53.914876  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
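	The /etc/hosts rewrite above is an idempotent append: strip any existing line for the name, re-add it with the current IP, and copy the result back in one pass. A generic sketch of the same pattern (NAME and IP are illustrative placeholders):
	
	# Sketch: idempotently pin NAME to IP in /etc/hosts.
	NAME=host.minikube.internal
	IP=192.168.76.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts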
	I1013 23:18:53.929173  635465 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 23:18:53.931924  635465 kubeadm.go:883] updating cluster {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:18:53.932080  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:53.932170  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.968015  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.968040  635465 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:18:53.968096  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.995316  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.995344  635465 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:18:53.995353  635465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:18:53.995455  635465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-041709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:18:53.995543  635465 ssh_runner.go:195] Run: crio config
	I1013 23:18:54.083016  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:54.083173  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:54.083216  635465 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 23:18:54.083275  635465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-041709 NodeName:newest-cni-041709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:18:54.083418  635465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-041709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
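	
	The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before it is used. One way to sanity-check such a file by hand on the node (a sketch; recent kubeadm releases ship a `config validate` subcommand, and the binary path below is the one the log uses):
	
	# Sketch: statically validate the generated kubeadm config on the node.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new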
	
	I1013 23:18:54.083495  635465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:18:54.092035  635465 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:18:54.092108  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:18:54.100446  635465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 23:18:54.113935  635465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:18:54.128703  635465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1013 23:18:54.145028  635465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:18:54.149031  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:18:54.159321  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:54.292995  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:54.310744  635465 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709 for IP: 192.168.76.2
	I1013 23:18:54.310802  635465 certs.go:195] generating shared ca certs ...
	I1013 23:18:54.310842  635465 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:54.311021  635465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:18:54.311158  635465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:18:54.311211  635465 certs.go:257] generating profile certs ...
	I1013 23:18:54.311334  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key
	I1013 23:18:54.311450  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96
	I1013 23:18:54.311534  635465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key
	I1013 23:18:54.311673  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:18:54.311741  635465 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:18:54.311778  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:18:54.311831  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:18:54.311886  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:18:54.311951  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:18:54.312039  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:54.312871  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:18:54.332293  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:18:54.350249  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:18:54.367756  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:18:54.385471  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 23:18:54.403134  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:18:54.427326  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:18:54.452916  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:18:54.475220  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:18:54.503057  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:18:54.528203  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:18:54.549801  635465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:18:54.563631  635465 ssh_runner.go:195] Run: openssl version
	I1013 23:18:54.570495  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:18:54.579237  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583560  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583693  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.629672  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:18:54.637997  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:18:54.646667  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651345  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651410  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.694293  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:18:54.703284  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:18:54.712284  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717383  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717493  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.759219  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
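	The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to locate CA files: each trusted CA must be reachable under /etc/ssl/certs as <hash>.0 (for example 51391683.0 just above). A sketch of the same convention:
	
	# Sketch: install a CA so OpenSSL can find it by subject hash.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"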
	I1013 23:18:54.767198  635465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:18:54.771070  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:18:54.812150  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:18:54.853770  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:18:54.895147  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:18:54.939997  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:18:54.993659  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
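	Each -checkend 86400 probe above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regeneration. A standalone sketch of the check:
	
	# Sketch: succeed if a cert is valid for at least 24h, fail otherwise.
	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate valid for at least 24h"
	else
	  echo "certificate expires within 24h (or is unreadable)"
	fi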
	I1013 23:18:55.053286  635465 kubeadm.go:400] StartCluster: {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:55.053441  635465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:18:55.053552  635465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:18:55.139374  635465 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:18:55.139447  635465 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:18:55.139465  635465 cri.go:89] found id: ""
	I1013 23:18:55.139546  635465 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:18:55.167760  635465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:18:55Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:18:55.167914  635465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:18:55.188111  635465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:18:55.188180  635465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:18:55.188260  635465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:18:55.208830  635465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:18:55.209520  635465 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-041709" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.209858  635465 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-041709" cluster setting kubeconfig missing "newest-cni-041709" context setting]
	I1013 23:18:55.210381  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.212378  635465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:18:55.227701  635465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 23:18:55.227783  635465 kubeadm.go:601] duration metric: took 39.582112ms to restartPrimaryControlPlane
	I1013 23:18:55.227808  635465 kubeadm.go:402] duration metric: took 174.531778ms to StartCluster
	I1013 23:18:55.227847  635465 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.227942  635465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.229017  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.229307  635465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:55.229824  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:55.229834  635465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:18:55.229919  635465 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-041709"
	I1013 23:18:55.229937  635465 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-041709"
	W1013 23:18:55.229950  635465 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:18:55.229972  635465 addons.go:69] Setting dashboard=true in profile "newest-cni-041709"
	I1013 23:18:55.230049  635465 addons.go:238] Setting addon dashboard=true in "newest-cni-041709"
	W1013 23:18:55.230070  635465 addons.go:247] addon dashboard should already be in state true
	I1013 23:18:55.230127  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.230742  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.229975  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.229981  635465 addons.go:69] Setting default-storageclass=true in profile "newest-cni-041709"
	I1013 23:18:55.231418  635465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-041709"
	I1013 23:18:55.231623  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.231707  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.237715  635465 out.go:179] * Verifying Kubernetes components...
	I1013 23:18:55.241204  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:55.292390  635465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:18:55.295919  635465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.295946  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:18:55.296012  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.296181  635465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:18:55.299151  635465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:18:55.302027  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:18:55.302060  635465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:18:55.302144  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.303396  635465 addons.go:238] Setting addon default-storageclass=true in "newest-cni-041709"
	W1013 23:18:55.303421  635465 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:18:55.303445  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.303884  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.347631  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.358489  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.365461  635465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.365486  635465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:18:55.365565  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.395199  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.630044  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:55.648454  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:18:55.648519  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:18:55.651006  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.652631  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.684755  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:18:55.684821  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:18:55.707356  635465 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:55.707479  635465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:55.767725  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:18:55.767791  635465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:18:55.824694  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:18:55.824761  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:18:55.875821  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:18:55.875896  635465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:18:55.908545  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:18:55.908611  635465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:18:55.925007  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:18:55.925084  635465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:18:55.943567  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:18:55.943637  635465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:18:55.959202  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:18:55.959275  635465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:18:55.973822  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
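	The addon manifests above are applied with the node-local kubectl binary and the admin kubeconfig kept under /var/lib/minikube, so the apply works even before the host kubeconfig is wired up. A sketch of the pattern for a single manifest:
	
	# Sketch: apply one addon manifest from inside the node, as the log does.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml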
	
	
	==> CRI-O <==
	Oct 13 23:18:47 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:47.230512471Z" level=info msg="Created container b3b9935a753af48d6558b214cbfa9f834688a22421f2ff19505cdf0a29fbede4: kube-system/coredns-66bc5c9577-qf4lq/coredns" id=2d403027-1baf-40f7-94b1-79ee343eb0d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:47 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:47.233245323Z" level=info msg="Starting container: b3b9935a753af48d6558b214cbfa9f834688a22421f2ff19505cdf0a29fbede4" id=549a9f5b-1b83-4b84-a505-311d003c1a5b name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:18:47 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:47.235899867Z" level=info msg="Started container" PID=1758 containerID=b3b9935a753af48d6558b214cbfa9f834688a22421f2ff19505cdf0a29fbede4 description=kube-system/coredns-66bc5c9577-qf4lq/coredns id=549a9f5b-1b83-4b84-a505-311d003c1a5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4db0cb5d4b3ab1140b4f38f0ac52721c47f0b68a90d132bc89e3f7976707b7
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.049073573Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bfce29e5-742e-45c9-8276-3b2fbcc61dc0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.049152849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.059944134Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9 UID:3ac75256-6e21-451f-a3a2-d6c2cfb61938 NetNS:/var/run/netns/ad126633-1ad7-4733-a73c-0955ef4c3046 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000795e8}] Aliases:map[]}"
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.060003086Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.06914153Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9 UID:3ac75256-6e21-451f-a3a2-d6c2cfb61938 NetNS:/var/run/netns/ad126633-1ad7-4733-a73c-0955ef4c3046 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000795e8}] Aliases:map[]}"
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.069293921Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.07314561Z" level=info msg="Ran pod sandbox b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9 with infra container: default/busybox/POD" id=bfce29e5-742e-45c9-8276-3b2fbcc61dc0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.074280907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f82584e-f3d0-4e2f-9718-ec0c3c8194cc name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.074491167Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9f82584e-f3d0-4e2f-9718-ec0c3c8194cc name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.074539544Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9f82584e-f3d0-4e2f-9718-ec0c3c8194cc name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.077635613Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=39b29aa9-4b79-4493-a479-f339ea9b5d33 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:18:50 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:50.083576508Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.240349549Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=39b29aa9-4b79-4493-a479-f339ea9b5d33 name=/runtime.v1.ImageService/PullImage
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.241122145Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63db8c90-2196-4f10-9857-a7c484ee04f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.242559154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99218573-2980-4db6-80f0-e2f47344ceca name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.254721644Z" level=info msg="Creating container: default/busybox/busybox" id=e15adbb3-0c8d-416f-a950-9fdb129d6325 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.255556491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.260253973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.260722977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.282610877Z" level=info msg="Created container 55bcec666b7ff6ab0196099b037bf4d6d3a458991146989a1902e45918d65bdc: default/busybox/busybox" id=e15adbb3-0c8d-416f-a950-9fdb129d6325 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.286371155Z" level=info msg="Starting container: 55bcec666b7ff6ab0196099b037bf4d6d3a458991146989a1902e45918d65bdc" id=a716c84d-8d0d-4e3a-b5b6-ea635c0a0ea5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:18:52 default-k8s-diff-port-033746 crio[837]: time="2025-10-13T23:18:52.291737965Z" level=info msg="Started container" PID=1816 containerID=55bcec666b7ff6ab0196099b037bf4d6d3a458991146989a1902e45918d65bdc description=default/busybox/busybox id=a716c84d-8d0d-4e3a-b5b6-ea635c0a0ea5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	55bcec666b7ff       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   b9fa25b89e8ca       busybox                                                default
	b3b9935a753af       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   5b4db0cb5d4b3       coredns-66bc5c9577-qf4lq                               kube-system
	1cf99de3157b3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   74776f3d25411       storage-provisioner                                    kube-system
	48aa3d5a0cbfe       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   ab68456e9571d       kube-proxy-mxnv7                                       kube-system
	c0a77a9951fdc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   3948a4baf20bc       kindnet-vgn6v                                          kube-system
	52ebc90faba67       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   feec063ea9a4f       kube-controller-manager-default-k8s-diff-port-033746   kube-system
	6e4818ff440f2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   cad94f86e16f2       kube-scheduler-default-k8s-diff-port-033746            kube-system
	aaa7bc1eafed4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   84d0c8bd6620b       kube-apiserver-default-k8s-diff-port-033746            kube-system
	2cd1bc93419c9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   252c7350d85a1       etcd-default-k8s-diff-port-033746                      kube-system
	
	
	==> coredns [b3b9935a753af48d6558b214cbfa9f834688a22421f2ff19505cdf0a29fbede4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36112 - 14657 "HINFO IN 3427524571051316497.7087621980772370549. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019850911s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-033746
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-033746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-033746
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-033746
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:19:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:19:00 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:19:00 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:19:00 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:19:00 +0000   Mon, 13 Oct 2025 23:18:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-033746
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b334b9dc-cabb-43d9-9bf2-cf916bb499bf
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-qf4lq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-033746                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-vgn6v                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-033746             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-033746    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-mxnv7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-033746             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-033746 event: Registered Node default-k8s-diff-port-033746 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeReady
	
	
	==> dmesg <==
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	[ +26.588739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2cd1bc93419c9201679782e31925990f5dc8538364fb89582eb6ecd30aaba4eb] <==
	{"level":"warn","ts":"2025-10-13T23:17:54.603464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.638434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.640887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.660908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.683459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.696569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.722428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.756768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.779689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.795008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.815188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.832921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.869725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.875017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.891753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.909311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.936187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.951572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.970518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:54.997179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:55.022665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:55.040405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:55.058448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:17:55.180183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40638","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T23:18:07.112162Z","caller":"traceutil/trace.go:172","msg":"trace[567015016] transaction","detail":"{read_only:false; number_of_response:1; response_revision:416; }","duration":"101.892241ms","start":"2025-10-13T23:18:07.010243Z","end":"2025-10-13T23:18:07.112135Z","steps":["trace[567015016] 'process raft request'  (duration: 63.216854ms)","trace[567015016] 'compare'  (duration: 38.316691ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:19:01 up  3:01,  0 user,  load average: 3.62, 3.52, 2.84
	Linux default-k8s-diff-port-033746 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0a77a9951fdc3117fe2b2f36ab94916f121ee4cd7186415af0b29050bb0f72b] <==
	I1013 23:18:05.806744       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:18:05.807584       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:18:05.807732       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:18:05.807744       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:18:05.807803       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:18:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:18:06.012042       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:18:06.012067       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:18:06.012076       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:18:06.012432       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:18:36.008332       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 23:18:36.013143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:18:36.013250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:18:36.013334       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 23:18:37.612274       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:18:37.612310       1 metrics.go:72] Registering metrics
	I1013 23:18:37.612381       1 controller.go:711] "Syncing nftables rules"
	I1013 23:18:46.011473       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:18:46.011657       1 main.go:301] handling current node
	I1013 23:18:56.007265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:18:56.007387       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aaa7bc1eafed4c34f0cd32c6e4fa85724cba85fb07a09d05a11bbd80f77c0006] <==
	I1013 23:17:56.307465       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1013 23:17:56.317744       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:17:56.317911       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 23:17:56.331659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:17:56.331770       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1013 23:17:56.331891       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1013 23:17:56.342903       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:17:56.543663       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:17:57.012681       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 23:17:57.021697       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 23:17:57.021722       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:17:58.051459       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:17:58.156198       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:17:58.221951       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 23:17:58.234336       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 23:17:58.237362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:17:58.254377       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:17:59.057607       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:17:59.064793       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:17:59.121552       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 23:17:59.152632       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 23:18:04.336048       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:18:04.985298       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 23:18:05.081215       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:18:05.102539       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [52ebc90faba67ab03e6c981361751ec873a849a59da3e72f822875da05a6ee2a] <==
	I1013 23:18:04.153272       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:18:04.153380       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:18:04.161282       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-033746" podCIDRs=["10.244.0.0/24"]
	I1013 23:18:04.161519       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 23:18:04.133873       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:18:04.170780       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:18:04.132252       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:18:04.167075       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:18:04.174458       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 23:18:04.178977       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 23:18:04.185369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:18:04.185443       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:18:04.185492       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:18:04.188134       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 23:18:04.188283       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:18:04.188388       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:18:04.188508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-033746"
	I1013 23:18:04.188582       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:18:04.188643       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 23:18:04.188857       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 23:18:04.196363       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 23:18:04.229152       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:18:04.229261       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:18:04.229268       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 23:18:49.195118       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [48aa3d5a0cbfe2618682d566832555a0264874dc6849a2541a22a392357c8253] <==
	I1013 23:18:06.230750       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:18:06.320190       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:18:06.422315       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:18:06.422374       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:18:06.422483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:18:06.506921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:18:06.506970       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:18:06.514375       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:18:06.514704       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:18:06.514721       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:18:06.523845       1 config.go:200] "Starting service config controller"
	I1013 23:18:06.523866       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:18:06.523884       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:18:06.523888       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:18:06.523899       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:18:06.523905       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:18:06.539824       1 config.go:309] "Starting node config controller"
	I1013 23:18:06.539847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:18:06.539861       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:18:06.630501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:18:06.630550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:18:06.630578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6e4818ff440f2768d2ec94bb811565a71ab9c2a8da986f3b9357c5ae6fade9e0] <==
	E1013 23:17:56.295483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 23:17:56.295521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:17:56.295571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:17:56.295604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:17:56.295637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 23:17:56.295671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:17:56.295707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:17:56.295745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 23:17:57.158022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 23:17:57.180193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 23:17:57.236904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 23:17:57.253884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 23:17:57.255222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 23:17:57.274100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 23:17:57.280178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 23:17:57.316861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 23:17:57.354929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 23:17:57.391455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 23:17:57.469590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 23:17:57.478457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 23:17:57.490196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 23:17:57.549312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 23:17:57.634294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 23:17:57.687504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1013 23:18:00.078114       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:18:00 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:00.511516    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-033746" podStartSLOduration=1.51149113 podStartE2EDuration="1.51149113s" podCreationTimestamp="2025-10-13 23:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:00.493395566 +0000 UTC m=+1.482344648" watchObservedRunningTime="2025-10-13 23:18:00.51149113 +0000 UTC m=+1.500440204"
	Oct 13 23:18:00 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:00.543987    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-033746" podStartSLOduration=1.543966272 podStartE2EDuration="1.543966272s" podCreationTimestamp="2025-10-13 23:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:00.515673348 +0000 UTC m=+1.504622430" watchObservedRunningTime="2025-10-13 23:18:00.543966272 +0000 UTC m=+1.532915362"
	Oct 13 23:18:04 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:04.157877    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 23:18:04 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:04.158436    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.128201    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6a27f223-9eda-4489-a432-bd17dffee02c-cni-cfg\") pod \"kindnet-vgn6v\" (UID: \"6a27f223-9eda-4489-a432-bd17dffee02c\") " pod="kube-system/kindnet-vgn6v"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.128403    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8wg\" (UniqueName: \"kubernetes.io/projected/6a27f223-9eda-4489-a432-bd17dffee02c-kube-api-access-sf8wg\") pod \"kindnet-vgn6v\" (UID: \"6a27f223-9eda-4489-a432-bd17dffee02c\") " pod="kube-system/kindnet-vgn6v"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.128516    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a27f223-9eda-4489-a432-bd17dffee02c-lib-modules\") pod \"kindnet-vgn6v\" (UID: \"6a27f223-9eda-4489-a432-bd17dffee02c\") " pod="kube-system/kindnet-vgn6v"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.128601    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a27f223-9eda-4489-a432-bd17dffee02c-xtables-lock\") pod \"kindnet-vgn6v\" (UID: \"6a27f223-9eda-4489-a432-bd17dffee02c\") " pod="kube-system/kindnet-vgn6v"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.231801    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec497b3c-7371-4a5d-a3ac-be5240db89ca-lib-modules\") pod \"kube-proxy-mxnv7\" (UID: \"ec497b3c-7371-4a5d-a3ac-be5240db89ca\") " pod="kube-system/kube-proxy-mxnv7"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.231869    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvkj\" (UniqueName: \"kubernetes.io/projected/ec497b3c-7371-4a5d-a3ac-be5240db89ca-kube-api-access-dvvkj\") pod \"kube-proxy-mxnv7\" (UID: \"ec497b3c-7371-4a5d-a3ac-be5240db89ca\") " pod="kube-system/kube-proxy-mxnv7"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.231900    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec497b3c-7371-4a5d-a3ac-be5240db89ca-kube-proxy\") pod \"kube-proxy-mxnv7\" (UID: \"ec497b3c-7371-4a5d-a3ac-be5240db89ca\") " pod="kube-system/kube-proxy-mxnv7"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.231916    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec497b3c-7371-4a5d-a3ac-be5240db89ca-xtables-lock\") pod \"kube-proxy-mxnv7\" (UID: \"ec497b3c-7371-4a5d-a3ac-be5240db89ca\") " pod="kube-system/kube-proxy-mxnv7"
	Oct 13 23:18:05 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:05.299342    1313 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:18:06 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:06.628345    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vgn6v" podStartSLOduration=2.628273815 podStartE2EDuration="2.628273815s" podCreationTimestamp="2025-10-13 23:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:06.569831816 +0000 UTC m=+7.558780898" watchObservedRunningTime="2025-10-13 23:18:06.628273815 +0000 UTC m=+7.617222889"
	Oct 13 23:18:06 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:06.628482    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxnv7" podStartSLOduration=2.6284741719999998 podStartE2EDuration="2.628474172s" podCreationTimestamp="2025-10-13 23:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:06.628023744 +0000 UTC m=+7.616972818" watchObservedRunningTime="2025-10-13 23:18:06.628474172 +0000 UTC m=+7.617423262"
	Oct 13 23:18:46 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:46.439877    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 23:18:46 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:46.682722    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bba169b1-b8a2-40d0-aa47-6ee1369a7107-tmp\") pod \"storage-provisioner\" (UID: \"bba169b1-b8a2-40d0-aa47-6ee1369a7107\") " pod="kube-system/storage-provisioner"
	Oct 13 23:18:46 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:46.682799    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a75d4ff9-259b-4a0c-9c05-ce8343096549-config-volume\") pod \"coredns-66bc5c9577-qf4lq\" (UID: \"a75d4ff9-259b-4a0c-9c05-ce8343096549\") " pod="kube-system/coredns-66bc5c9577-qf4lq"
	Oct 13 23:18:46 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:46.682832    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8hw\" (UniqueName: \"kubernetes.io/projected/bba169b1-b8a2-40d0-aa47-6ee1369a7107-kube-api-access-kp8hw\") pod \"storage-provisioner\" (UID: \"bba169b1-b8a2-40d0-aa47-6ee1369a7107\") " pod="kube-system/storage-provisioner"
	Oct 13 23:18:46 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:46.682872    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khqd\" (UniqueName: \"kubernetes.io/projected/a75d4ff9-259b-4a0c-9c05-ce8343096549-kube-api-access-9khqd\") pod \"coredns-66bc5c9577-qf4lq\" (UID: \"a75d4ff9-259b-4a0c-9c05-ce8343096549\") " pod="kube-system/coredns-66bc5c9577-qf4lq"
	Oct 13 23:18:47 default-k8s-diff-port-033746 kubelet[1313]: W1013 23:18:47.115790    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/crio-74776f3d25411ee0a93e74990215a2af4a26ea4b761ec7a9c20966639d26d563 WatchSource:0}: Error finding container 74776f3d25411ee0a93e74990215a2af4a26ea4b761ec7a9c20966639d26d563: Status 404 returned error can't find the container with id 74776f3d25411ee0a93e74990215a2af4a26ea4b761ec7a9c20966639d26d563
	Oct 13 23:18:47 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:47.677787    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.677757401 podStartE2EDuration="40.677757401s" podCreationTimestamp="2025-10-13 23:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:47.657414208 +0000 UTC m=+48.646363282" watchObservedRunningTime="2025-10-13 23:18:47.677757401 +0000 UTC m=+48.666706474"
	Oct 13 23:18:49 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:49.739548    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qf4lq" podStartSLOduration=44.739528151 podStartE2EDuration="44.739528151s" podCreationTimestamp="2025-10-13 23:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 23:18:47.679202804 +0000 UTC m=+48.668151894" watchObservedRunningTime="2025-10-13 23:18:49.739528151 +0000 UTC m=+50.728477233"
	Oct 13 23:18:49 default-k8s-diff-port-033746 kubelet[1313]: I1013 23:18:49.804964    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5qp\" (UniqueName: \"kubernetes.io/projected/3ac75256-6e21-451f-a3a2-d6c2cfb61938-kube-api-access-rt5qp\") pod \"busybox\" (UID: \"3ac75256-6e21-451f-a3a2-d6c2cfb61938\") " pod="default/busybox"
	Oct 13 23:18:50 default-k8s-diff-port-033746 kubelet[1313]: W1013 23:18:50.071273    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/crio-b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9 WatchSource:0}: Error finding container b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9: Status 404 returned error can't find the container with id b9fa25b89e8ca560904636db884feaf50dda3b0bacf991203eabbe17bf0091a9
	
	
	==> storage-provisioner [1cf99de3157b3b5da1b1320fa01ae194be98bec998cbb7aacc1bd6fe097d7038] <==
	I1013 23:18:47.269364       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 23:18:47.308208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 23:18:47.311430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 23:18:47.375917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:47.384682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:18:47.384906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 23:18:47.386609       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-033746_98287940-a706-41d5-a5cf-829fcfdc67ee!
	I1013 23:18:47.386433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23ecfaf4-4360-4145-8e41-5e272b4a7add", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-033746_98287940-a706-41d5-a5cf-829fcfdc67ee became leader
	W1013 23:18:47.399560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:47.404272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 23:18:47.487370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-033746_98287940-a706-41d5-a5cf-829fcfdc67ee!
	W1013 23:18:49.418802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:49.426395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:51.429797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:51.435644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:53.448292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:53.454575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:55.458403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:55.471278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:57.474822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:57.480834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:59.487837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:18:59.503029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:19:01.514516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:19:01.531423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
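Side note on the repeated warnings.go:70 lines in the storage-provisioner log above: they are client-go's deprecation notice that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, and they recur roughly every two seconds because the provisioner's leader election uses an Endpoints-based lock (see the k8s.io-minikube-hostpath Endpoints event above). A minimal sketch of the suggested migration, assuming a standard client-go setup; the kubeconfig path and namespace are illustrative, and this is not minikube's or the provisioner's own code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is illustrative; any reachable cluster config works.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List discovery.k8s.io/v1 EndpointSlices instead of the deprecated
		// v1 Endpoints that triggers the warnings.go:70 messages above.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}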
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.76s)
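In the next failure below (TestStartStop/group/newest-cni/serial/Pause), minikube's retry.go retries `sudo runc list -f json` with jittered, growing delays (228ms, then 412ms) before giving up with GUEST_PAUSE. A self-contained sketch of that retry shape, using hypothetical names and timings rather than minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff is a hypothetical helper mirroring the shape of the
	// "will retry after ..." loop visible in the pause logs: run fn, and on
	// failure sleep a jittered, growing delay before the next attempt.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			base *= 2 // grow the base delay, as the 228ms -> 412ms steps suggest
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			// Stand-in for `sudo runc list -f json`, which in the logs fails
			// on every attempt with "open /run/runc: no such file or directory".
			return errors.New("open /run/runc: no such file or directory")
		})
		fmt.Println("exiting with GUEST_PAUSE-style failure:", err)
	}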

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-041709 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-041709 --alsologtostderr -v=1: exit status 80 (1.948310006s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-041709 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 23:19:04.570999  637954 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:19:04.571322  637954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:04.571353  637954 out.go:374] Setting ErrFile to fd 2...
	I1013 23:19:04.571373  637954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:04.571661  637954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:19:04.571957  637954 out.go:368] Setting JSON to false
	I1013 23:19:04.572007  637954 mustload.go:65] Loading cluster: newest-cni-041709
	I1013 23:19:04.572482  637954 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:04.573015  637954 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:19:04.590930  637954 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:19:04.591396  637954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:19:04.662364  637954 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-13 23:19:04.651613646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:19:04.663020  637954 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-041709 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 23:19:04.668810  637954 out.go:179] * Pausing node newest-cni-041709 ... 
	I1013 23:19:04.671831  637954 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:19:04.672178  637954 ssh_runner.go:195] Run: systemctl --version
	I1013 23:19:04.672232  637954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:19:04.688883  637954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:19:04.801857  637954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:19:04.815759  637954 pause.go:52] kubelet running: true
	I1013 23:19:04.815826  637954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:19:05.038743  637954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:19:05.038839  637954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:19:05.118495  637954 cri.go:89] found id: "13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f"
	I1013 23:19:05.118523  637954 cri.go:89] found id: "62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b"
	I1013 23:19:05.118528  637954 cri.go:89] found id: "ff97608ecad8678ef83e1cca9d995096860465b562e07d958b0f6db3f4e80297"
	I1013 23:19:05.118532  637954 cri.go:89] found id: "f585eee05e276e63d2044fb2ed0672a9197d1aaaacfa329137bbffb7a6fe644d"
	I1013 23:19:05.118536  637954 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:19:05.118541  637954 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:19:05.118544  637954 cri.go:89] found id: ""
	I1013 23:19:05.118602  637954 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:19:05.130012  637954 retry.go:31] will retry after 228.669302ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:19:05Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:19:05.359497  637954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:19:05.373174  637954 pause.go:52] kubelet running: false
	I1013 23:19:05.373314  637954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:19:05.533844  637954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:19:05.533968  637954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:19:05.647774  637954 cri.go:89] found id: "13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f"
	I1013 23:19:05.647841  637954 cri.go:89] found id: "62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b"
	I1013 23:19:05.647860  637954 cri.go:89] found id: "ff97608ecad8678ef83e1cca9d995096860465b562e07d958b0f6db3f4e80297"
	I1013 23:19:05.647877  637954 cri.go:89] found id: "f585eee05e276e63d2044fb2ed0672a9197d1aaaacfa329137bbffb7a6fe644d"
	I1013 23:19:05.647897  637954 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:19:05.647927  637954 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:19:05.647950  637954 cri.go:89] found id: ""
	I1013 23:19:05.648028  637954 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:19:05.661824  637954 retry.go:31] will retry after 412.036618ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:19:05Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:19:06.074455  637954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:19:06.096445  637954 pause.go:52] kubelet running: false
	I1013 23:19:06.096512  637954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:19:06.292972  637954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:19:06.293056  637954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:19:06.375436  637954 cri.go:89] found id: "13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f"
	I1013 23:19:06.375510  637954 cri.go:89] found id: "62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b"
	I1013 23:19:06.375531  637954 cri.go:89] found id: "ff97608ecad8678ef83e1cca9d995096860465b562e07d958b0f6db3f4e80297"
	I1013 23:19:06.375551  637954 cri.go:89] found id: "f585eee05e276e63d2044fb2ed0672a9197d1aaaacfa329137bbffb7a6fe644d"
	I1013 23:19:06.375585  637954 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:19:06.375609  637954 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:19:06.375629  637954 cri.go:89] found id: ""
	I1013 23:19:06.375711  637954 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:19:06.392019  637954 out.go:203] 
	W1013 23:19:06.395133  637954 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:19:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:19:06.395166  637954 out.go:285] * 
	W1013 23:19:06.402561  637954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:19:06.407627  637954 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-041709 --alsologtostderr -v=1 failed: exit status 80
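The failure above is mechanical: pause stops the kubelet, lists CRI containers via crictl, then shells out to `sudo runc list -f json`, which exits 1 because `/run/runc` does not exist inside the crio node container. A minimal Go sketch of that probe-and-retry loop, using plain os/exec as a stand-in for minikube's internal ssh_runner and retry helpers (the backoff delays are copied from the retry.go lines above; this is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runcList mirrors the command that fails in the log: runc reads its state
// from /run/runc by default, so the call errors out when that directory is
// missing on the node.
func runcList() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var lastErr error
	// Three attempts, with the 228ms and 412ms backoffs seen in the log.
	for _, delay := range []time.Duration{0, 228 * time.Millisecond, 412 * time.Millisecond} {
		time.Sleep(delay)
		out, err := runcList()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = err
	}
	// Exhausting the retries is what surfaces as GUEST_PAUSE / exit status 80.
	fmt.Printf("X Exiting due to GUEST_PAUSE: list running: %v\n", lastErr)
}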
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
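The `<empty>` values are snapshots of the host proxy environment; a hedged stand-in for that check (an unset and a blank variable both report as `<empty>`):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Report each proxy variable the way the post-mortem header above does.
	for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		val := os.Getenv(key)
		if val == "" {
			val = "<empty>"
		}
		fmt.Printf("%s=%q\n", key, val)
	}
}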
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-041709
helpers_test.go:243: (dbg) docker inspect newest-cni-041709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	        "Created": "2025-10-13T23:18:08.094436918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 635599,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:18:46.994437684Z",
	            "FinishedAt": "2025-10-13T23:18:45.926059869Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hosts",
	        "LogPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd-json.log",
	        "Name": "/newest-cni-041709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-041709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-041709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	                "LowerDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-041709",
	                "Source": "/var/lib/docker/volumes/newest-cni-041709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-041709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-041709",
	                "name.minikube.sigs.k8s.io": "newest-cni-041709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7c33feec07c4d3c86a89a089af3c910301800ce374427b728858f27ad99b92b",
	            "SandboxKey": "/var/run/docker/netns/f7c33feec07c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-041709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ab:f2:b0:49:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3df7c953cf9f4e0e97cdf9e47b4f15792247e0d1f7edb011f023caaa15ec476f",
	                    "EndpointID": "68e6ca1ea11907448a1fbdc141d752ccb52016ca76d93deee98944a86641f5cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-041709",
	                        "06492791cd8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
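The harness resolves the SSH endpoint from this inspect output with a Go template (`docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"`, visible in the pause log above). A sketch of the same lookup done by decoding the JSON instead, assuming only that the docker CLI is on PATH; against the dump above it resolves 22/tcp to 127.0.0.1:33484, the address the SSH client later dials:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields needed for the port lookup; docker's JSON keys match the
// exported field names case-insensitively.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-041709").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	// Panics if the container exposes no 22/tcp binding; acceptable for a sketch.
	b := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
}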
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709: exit status 2 (346.006661ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
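The `(may be ok)` note reflects that the harness trusts the printed host field over the exit code: pausing components makes `minikube status` exit non-zero even though the host line still reads Running. A hedged sketch of that tolerance (binary path and profile name taken from the command above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "newest-cni-041709", "-n", "newest-cni-041709")
	// Output returns the captured stdout even when the command exits non-zero.
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	fmt.Printf("host=%q err=%v\n", host, err)
	if host == "Running" {
		fmt.Println("status error may be ok: host is up; components may be paused")
	}
}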
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25: (1.053909422s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p newest-cni-041709 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-033746 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	│ image   │ newest-cni-041709 image list --format=json                                                                                                                                                                                                    │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ pause   │ -p newest-cni-041709 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:18:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:18:46.659625  635465 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:18:46.659759  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.659771  635465 out.go:374] Setting ErrFile to fd 2...
	I1013 23:18:46.659778  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.660033  635465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:18:46.660394  635465 out.go:368] Setting JSON to false
	I1013 23:18:46.661326  635465 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10863,"bootTime":1760386664,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:18:46.661395  635465 start.go:141] virtualization:  
	I1013 23:18:46.665006  635465 out.go:179] * [newest-cni-041709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:18:46.668932  635465 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:18:46.668996  635465 notify.go:220] Checking for updates...
	I1013 23:18:46.674873  635465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:18:46.677956  635465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:46.680967  635465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:18:46.684029  635465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:18:46.686838  635465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:18:46.690305  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:46.690950  635465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:18:46.730001  635465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:18:46.730151  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.812994  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.793285508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.813104  635465 docker.go:318] overlay module found
	I1013 23:18:46.817039  635465 out.go:179] * Using the docker driver based on existing profile
	I1013 23:18:46.820013  635465 start.go:305] selected driver: docker
	I1013 23:18:46.820036  635465 start.go:925] validating driver "docker" against &{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.820133  635465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:18:46.820838  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.888064  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.878929832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.888406  635465 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:18:46.888461  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:46.888520  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:46.888562  635465 start.go:349] cluster config:
	{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.892201  635465 out.go:179] * Starting "newest-cni-041709" primary control-plane node in "newest-cni-041709" cluster
	I1013 23:18:46.895353  635465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:18:46.898524  635465 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:18:46.901626  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:46.901707  635465 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:18:46.901713  635465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:18:46.901723  635465 cache.go:58] Caching tarball of preloaded images
	I1013 23:18:46.901838  635465 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:18:46.901850  635465 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:18:46.901994  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:46.932671  635465 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:18:46.932697  635465 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:18:46.932710  635465 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:18:46.932738  635465 start.go:360] acquireMachinesLock for newest-cni-041709: {Name:mk550fb39e8064c08d6ccaf342c21fc53a30808d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:18:46.932799  635465 start.go:364] duration metric: took 35.913µs to acquireMachinesLock for "newest-cni-041709"
	I1013 23:18:46.932823  635465 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:18:46.932842  635465 fix.go:54] fixHost starting: 
	I1013 23:18:46.933108  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:46.956430  635465 fix.go:112] recreateIfNeeded on newest-cni-041709: state=Stopped err=<nil>
	W1013 23:18:46.956463  635465 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:18:43.907614  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:46.408020  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:46.908290  628422 node_ready.go:49] node "default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:46.908317  628422 node_ready.go:38] duration metric: took 40.004370462s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:18:46.908330  628422 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:46.908478  628422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:46.923250  628422 api_server.go:72] duration metric: took 41.358831182s to wait for apiserver process to appear ...
	I1013 23:18:46.923281  628422 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:18:46.923305  628422 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1013 23:18:46.935604  628422 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1013 23:18:46.936939  628422 api_server.go:141] control plane version: v1.34.1
	I1013 23:18:46.936961  628422 api_server.go:131] duration metric: took 13.673442ms to wait for apiserver health ...
	I1013 23:18:46.936970  628422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:18:46.941226  628422 system_pods.go:59] 8 kube-system pods found
	I1013 23:18:46.941259  628422 system_pods.go:61] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.941266  628422 system_pods.go:61] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.941273  628422 system_pods.go:61] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.941278  628422 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.941283  628422 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.941287  628422 system_pods.go:61] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.941292  628422 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.941297  628422 system_pods.go:61] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.941305  628422 system_pods.go:74] duration metric: took 4.329029ms to wait for pod list to return data ...
	I1013 23:18:46.941312  628422 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:18:46.944488  628422 default_sa.go:45] found service account: "default"
	I1013 23:18:46.944516  628422 default_sa.go:55] duration metric: took 3.197368ms for default service account to be created ...
	I1013 23:18:46.944526  628422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:18:46.950031  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:46.950073  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.950081  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.950087  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.950092  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.950097  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.950101  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.950106  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.950112  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.950139  628422 retry.go:31] will retry after 198.826906ms: missing components: kube-dns
	I1013 23:18:47.153958  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.153990  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.153998  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.154004  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.154008  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.154012  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.154017  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.154020  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.154026  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.154041  628422 retry.go:31] will retry after 287.091453ms: missing components: kube-dns
	I1013 23:18:47.445492  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.445522  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.445530  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.445537  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.445542  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.445546  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.445550  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.445554  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.445560  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.445574  628422 retry.go:31] will retry after 372.489262ms: missing components: kube-dns
	I1013 23:18:47.822900  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.822989  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running
	I1013 23:18:47.823015  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.823037  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.823064  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.823169  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.823209  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.823238  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.823260  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:18:47.823291  628422 system_pods.go:126] duration metric: took 878.758193ms to wait for k8s-apps to be running ...
	I1013 23:18:47.823314  628422 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:18:47.823387  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:18:47.840918  628422 system_svc.go:56] duration metric: took 17.596072ms WaitForService to wait for kubelet
	I1013 23:18:47.840951  628422 kubeadm.go:586] duration metric: took 42.276544463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:18:47.840971  628422 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:18:47.845175  628422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:18:47.845214  628422 node_conditions.go:123] node cpu capacity is 2
	I1013 23:18:47.845227  628422 node_conditions.go:105] duration metric: took 4.251164ms to run NodePressure ...
	I1013 23:18:47.845240  628422 start.go:241] waiting for startup goroutines ...
	I1013 23:18:47.845248  628422 start.go:246] waiting for cluster config update ...
	I1013 23:18:47.845259  628422 start.go:255] writing updated cluster config ...
	I1013 23:18:47.845569  628422 ssh_runner.go:195] Run: rm -f paused
	I1013 23:18:47.851978  628422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:47.855576  628422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.860797  628422 pod_ready.go:94] pod "coredns-66bc5c9577-qf4lq" is "Ready"
	I1013 23:18:47.860825  628422 pod_ready.go:86] duration metric: took 5.221377ms for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.863794  628422 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.869540  628422 pod_ready.go:94] pod "etcd-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.869581  628422 pod_ready.go:86] duration metric: took 5.758982ms for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.872429  628422 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.877409  628422 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.877436  628422 pod_ready.go:86] duration metric: took 4.943697ms for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.880160  628422 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.256569  628422 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:48.256600  628422 pod_ready.go:86] duration metric: took 376.414834ms for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.455997  628422 pod_ready.go:83] waiting for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.857167  628422 pod_ready.go:94] pod "kube-proxy-mxnv7" is "Ready"
	I1013 23:18:48.857199  628422 pod_ready.go:86] duration metric: took 401.173799ms for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.056104  628422 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456155  628422 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:49.456184  628422 pod_ready.go:86] duration metric: took 400.055996ms for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456198  628422 pod_ready.go:40] duration metric: took 1.604180795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:49.517347  628422 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:18:49.520660  628422 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-033746" cluster and "default" namespace by default
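The pod_ready polling above waits, per label selector, for each control-plane pod in kube-system to report the Ready condition (or disappear), within a 4m0s budget. A minimal client-go sketch of the same idea — not minikube's pod_ready.go, and the kubeconfig path is hypothetical:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether a pod's Ready condition is True.
    func isReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
    	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
    		for {
    			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
    			ready := err == nil && len(pods.Items) > 0
    			for i := range pods.Items {
    				ready = ready && isReady(&pods.Items[i])
    			}
    			if ready {
    				fmt.Println(sel, "ready")
    				break
    			}
    			if time.Now().After(deadline) {
    				panic("timed out waiting for " + sel)
    			}
    			time.Sleep(2 * time.Second)
    		}
    	}
    }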
	I1013 23:18:46.959948  635465 out.go:252] * Restarting existing docker container for "newest-cni-041709" ...
	I1013 23:18:46.960041  635465 cli_runner.go:164] Run: docker start newest-cni-041709
	I1013 23:18:47.272614  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:47.301351  635465 kic.go:430] container "newest-cni-041709" state is running.
	I1013 23:18:47.302387  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:47.336818  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:47.337080  635465 machine.go:93] provisionDockerMachine start ...
	I1013 23:18:47.337156  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:47.361341  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:47.361673  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:47.361689  635465 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:18:47.362232  635465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54710->127.0.0.1:33484: read: connection reset by peer
	I1013 23:18:50.506943  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.506971  635465 ubuntu.go:182] provisioning hostname "newest-cni-041709"
	I1013 23:18:50.507038  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.529919  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.530245  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.530263  635465 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-041709 && echo "newest-cni-041709" | sudo tee /etc/hostname
	I1013 23:18:50.684921  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.685001  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.704082  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.704403  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.704447  635465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-041709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-041709/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-041709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:18:50.859361  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
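Each "About to run SSH command" step above runs one shell snippet over the container's forwarded SSH port (127.0.0.1:33484 here), authenticating with the profile's id_rsa key. A minimal sketch of that pattern with golang.org/x/crypto/ssh — illustrative only, not libmachine's implementation, and the key path is abbreviated:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/.minikube/machines/newest-cni-041709/id_rsa") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33484", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	// One session runs one command, exactly like the provisioning steps in the log.
    	out, err := sess.CombinedOutput(`sudo hostname newest-cni-041709 && echo "newest-cni-041709" | sudo tee /etc/hostname`)
    	fmt.Printf("output: %s err: %v\n", out, err)
    }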
	I1013 23:18:50.859386  635465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:18:50.859412  635465 ubuntu.go:190] setting up certificates
	I1013 23:18:50.859421  635465 provision.go:84] configureAuth start
	I1013 23:18:50.859479  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:50.876142  635465 provision.go:143] copyHostCerts
	I1013 23:18:50.876213  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:18:50.876241  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:18:50.876320  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:18:50.876465  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:18:50.876477  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:18:50.876508  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:18:50.876577  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:18:50.876587  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:18:50.876613  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:18:50.876675  635465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.newest-cni-041709 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-041709]
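The provision.go:117 step issues a server certificate signed by the minikube CA, with the SAN list shown (a mix of IPs and DNS names). A condensed crypto/x509 sketch of that signing step, using a throwaway CA and hypothetical names; minikube's actual certificate options may differ:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func must[T any](v T, err error) T {
    	if err != nil {
    		panic(err)
    	}
    	return v
    }

    // signServerCert issues a CA-signed serving cert whose SANs may be IPs or DNS names.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) []byte {
    	key := must(rsa.GenerateKey(rand.Reader, 2048))
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Split the SAN list as in the log: IPs go to IPAddresses, the rest to DNSNames.
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	return must(x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey))
    }

    func main() {
    	// Throwaway self-signed CA, for demonstration only.
    	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))
    	der := signServerCert(caCert, caKey, "jenkins.newest-cni-041709",
    		[]string{"127.0.0.1", "192.168.76.2", "localhost", "minikube", "newest-cni-041709"})
    	fmt.Println("issued cert,", len(der), "DER bytes")
    }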
	I1013 23:18:51.531424  635465 provision.go:177] copyRemoteCerts
	I1013 23:18:51.531540  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:18:51.531626  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.550654  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:51.658836  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:18:51.677867  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:18:51.695565  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:18:51.718317  635465 provision.go:87] duration metric: took 858.871562ms to configureAuth
	I1013 23:18:51.718407  635465 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:18:51.718639  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:51.718821  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.739263  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:51.739570  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:51.739597  635465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:18:52.103542  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:18:52.103629  635465 machine.go:96] duration metric: took 4.766529819s to provisionDockerMachine
	I1013 23:18:52.103655  635465 start.go:293] postStartSetup for "newest-cni-041709" (driver="docker")
	I1013 23:18:52.103680  635465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:18:52.103769  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:18:52.103834  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.134085  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.243909  635465 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:18:52.248277  635465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:18:52.248316  635465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:18:52.248328  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:18:52.248403  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:18:52.248560  635465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:18:52.248714  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:18:52.265882  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:52.295454  635465 start.go:296] duration metric: took 191.769349ms for postStartSetup
	I1013 23:18:52.295552  635465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:18:52.295635  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.314699  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.416219  635465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:18:52.421219  635465 fix.go:56] duration metric: took 5.488375753s for fixHost
	I1013 23:18:52.421253  635465 start.go:83] releasing machines lock for "newest-cni-041709", held for 5.488442081s
	I1013 23:18:52.421386  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:52.447747  635465 ssh_runner.go:195] Run: cat /version.json
	I1013 23:18:52.447805  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.447830  635465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:18:52.447892  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.472897  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.484151  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.583128  635465 ssh_runner.go:195] Run: systemctl --version
	I1013 23:18:52.675301  635465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:18:52.730783  635465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:18:52.736216  635465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:18:52.736295  635465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:18:52.744350  635465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
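The find/mv invocation above side-lines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs (kindnet here) stays active. The same idea in plain Go — a sketch, not minikube's code; it needs root for /etc/cni/net.d:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	disabled := 0
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			panic(err)
    		}
    		for _, f := range matches {
    			if strings.HasSuffix(f, ".mk_disabled") {
    				continue // already side-lined, mirrors find's -not -name *.mk_disabled
    			}
    			if err := os.Rename(f, f+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", f)
    			disabled++
    		}
    	}
    	if disabled == 0 {
    		fmt.Println("no active bridge cni configs found - nothing to disable")
    	}
    }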
	I1013 23:18:52.744460  635465 start.go:495] detecting cgroup driver to use...
	I1013 23:18:52.744522  635465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:18:52.744593  635465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:18:52.760915  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:18:52.775406  635465 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:18:52.775473  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:18:52.791809  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:18:52.805702  635465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:18:52.932808  635465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:18:53.078511  635465 docker.go:234] disabling docker service ...
	I1013 23:18:53.078575  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:18:53.096051  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:18:53.111550  635465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:18:53.239198  635465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:18:53.365077  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:18:53.379354  635465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:18:53.393225  635465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:18:53.393321  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.402665  635465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:18:53.402754  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.412000  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.421032  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.430178  635465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:18:53.446415  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.461458  635465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.471380  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.480834  635465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:18:53.488886  635465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:18:53.496978  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:53.632607  635465 ssh_runner.go:195] Run: sudo systemctl restart crio
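The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup — before daemon-reload and a CRI-O restart pick the changes up. An equivalent line-oriented rewrite in Go with regexp (illustrative; the real flow shells out to sed, and this needs root):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// (?m) makes ^/$ match per line, mirroring sed's line-at-a-time model.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Drop any existing conmon_cgroup line, then re-insert it after cgroup_manager,
    	// the same delete-then-append dance as the sed steps in the log.
    	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }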
	I1013 23:18:53.798233  635465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:18:53.798351  635465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:18:53.802403  635465 start.go:563] Will wait 60s for crictl version
	I1013 23:18:53.802545  635465 ssh_runner.go:195] Run: which crictl
	I1013 23:18:53.806255  635465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:18:53.831293  635465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:18:53.831455  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.861533  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.892208  635465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:18:53.894956  635465 cli_runner.go:164] Run: docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:18:53.910736  635465 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:18:53.914876  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
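The bash one-liner above is an idempotent hosts-file update: the preceding grep skips the write when the entry already exists; otherwise every old host.minikube.internal line is filtered out, the fresh mapping appended, and the temp file copied back over /etc/hosts. A direct Go transcription of the pattern (a sketch; the real code runs the bash over SSH, and the temp-file-then-copy step there is the safer atomic variant):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.76.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Same filter as grep -v $'\thost.minikube.internal$'.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }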
	I1013 23:18:53.929173  635465 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 23:18:53.931924  635465 kubeadm.go:883] updating cluster {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:18:53.932080  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:53.932170  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.968015  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.968040  635465 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:18:53.968096  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.995316  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.995344  635465 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:18:53.995353  635465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:18:53.995455  635465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-041709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
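The kubelet unit above is rendered from the cluster config: --hostname-override and --node-ip come from the node entry, and the binary path embeds KubernetesVersion. A minimal text/template sketch of that rendering, with abridged flags and hypothetical field names (not minikube's template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // The empty ExecStart= line clears any inherited value before setting the real one,
    // as in the drop-in shown in the log.
    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	node := struct{ Version, Name, IP string }{"v1.34.1", "newest-cni-041709", "192.168.76.2"}
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	if err := t.Execute(os.Stdout, node); err != nil {
    		panic(err)
    	}
    }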
	I1013 23:18:53.995543  635465 ssh_runner.go:195] Run: crio config
	I1013 23:18:54.083016  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:54.083173  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:54.083216  635465 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 23:18:54.083275  635465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-041709 NodeName:newest-cni-041709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:18:54.083418  635465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-041709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:18:54.083495  635465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:18:54.092035  635465 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:18:54.092108  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:18:54.100446  635465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 23:18:54.113935  635465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:18:54.128703  635465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
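The kubeadm.yaml.new written above is one file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by --- markers. A small checker that walks such a multi-document file and prints each document's apiVersion and kind, using gopkg.in/yaml.v3 (a sketch, not part of minikube):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.Decoder yields one document per Decode call and io.EOF at the end.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }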
	I1013 23:18:54.145028  635465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:18:54.149031  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:18:54.159321  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:54.292995  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:54.310744  635465 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709 for IP: 192.168.76.2
	I1013 23:18:54.310802  635465 certs.go:195] generating shared ca certs ...
	I1013 23:18:54.310842  635465 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:54.311021  635465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:18:54.311158  635465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:18:54.311211  635465 certs.go:257] generating profile certs ...
	I1013 23:18:54.311334  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key
	I1013 23:18:54.311450  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96
	I1013 23:18:54.311534  635465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key
	I1013 23:18:54.311673  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:18:54.311741  635465 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:18:54.311778  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:18:54.311831  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:18:54.311886  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:18:54.311951  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:18:54.312039  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:54.312871  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:18:54.332293  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:18:54.350249  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:18:54.367756  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:18:54.385471  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 23:18:54.403134  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:18:54.427326  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:18:54.452916  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:18:54.475220  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:18:54.503057  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:18:54.528203  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:18:54.549801  635465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:18:54.563631  635465 ssh_runner.go:195] Run: openssl version
	I1013 23:18:54.570495  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:18:54.579237  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583560  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583693  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.629672  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:18:54.637997  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:18:54.646667  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651345  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651410  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.694293  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:18:54.703284  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:18:54.712284  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717383  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717493  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.759219  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:18:54.767198  635465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:18:54.771070  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:18:54.812150  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:18:54.853770  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:18:54.895147  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:18:54.939997  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:18:54.993659  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
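Each `openssl x509 -checkend 86400` run above asks whether the certificate will expire within the next 24 hours; a non-zero exit would trigger regeneration before the cluster is restarted. The same check in Go (a sketch over one of the paths from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 86400s; regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }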
	I1013 23:18:55.053286  635465 kubeadm.go:400] StartCluster: {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:55.053441  635465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:18:55.053552  635465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:18:55.139374  635465 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:18:55.139447  635465 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:18:55.139465  635465 cri.go:89] found id: ""
	I1013 23:18:55.139546  635465 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:18:55.167760  635465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:18:55Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:18:55.167914  635465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:18:55.188111  635465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:18:55.188180  635465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:18:55.188260  635465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:18:55.208830  635465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:18:55.209520  635465 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-041709" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.209858  635465 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-041709" cluster setting kubeconfig missing "newest-cni-041709" context setting]
	I1013 23:18:55.210381  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.212378  635465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:18:55.227701  635465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 23:18:55.227783  635465 kubeadm.go:601] duration metric: took 39.582112ms to restartPrimaryControlPlane
	I1013 23:18:55.227808  635465 kubeadm.go:402] duration metric: took 174.531778ms to StartCluster
	I1013 23:18:55.227847  635465 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.227942  635465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.229017  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.229307  635465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:55.229824  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:55.229834  635465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:18:55.229919  635465 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-041709"
	I1013 23:18:55.229937  635465 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-041709"
	W1013 23:18:55.229950  635465 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:18:55.229972  635465 addons.go:69] Setting dashboard=true in profile "newest-cni-041709"
	I1013 23:18:55.230049  635465 addons.go:238] Setting addon dashboard=true in "newest-cni-041709"
	W1013 23:18:55.230070  635465 addons.go:247] addon dashboard should already be in state true
	I1013 23:18:55.230127  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.230742  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.229975  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.229981  635465 addons.go:69] Setting default-storageclass=true in profile "newest-cni-041709"
	I1013 23:18:55.231418  635465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-041709"
	I1013 23:18:55.231623  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.231707  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.237715  635465 out.go:179] * Verifying Kubernetes components...
	I1013 23:18:55.241204  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:55.292390  635465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:18:55.295919  635465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.295946  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:18:55.296012  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.296181  635465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:18:55.299151  635465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:18:55.302027  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:18:55.302060  635465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:18:55.302144  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.303396  635465 addons.go:238] Setting addon default-storageclass=true in "newest-cni-041709"
	W1013 23:18:55.303421  635465 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:18:55.303445  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.303884  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.347631  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.358489  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.365461  635465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.365486  635465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:18:55.365565  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.395199  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.630044  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:55.648454  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:18:55.648519  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:18:55.651006  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.652631  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.684755  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:18:55.684821  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:18:55.707356  635465 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:55.707479  635465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:55.767725  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:18:55.767791  635465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:18:55.824694  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:18:55.824761  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:18:55.875821  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:18:55.875896  635465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:18:55.908545  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:18:55.908611  635465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:18:55.925007  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:18:55.925084  635465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:18:55.943567  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:18:55.943637  635465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:18:55.959202  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:18:55.959275  635465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:18:55.973822  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:19:01.669714  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.018637686s)
	I1013 23:19:03.631413  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.978715869s)
	I1013 23:19:03.631471  635465 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.923948212s)
	I1013 23:19:03.631484  635465 api_server.go:72] duration metric: took 8.402112688s to wait for apiserver process to appear ...
	I1013 23:19:03.631490  635465 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:19:03.631506  635465 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:19:03.631861  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.657957301s)
	I1013 23:19:03.635115  635465 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-041709 addons enable metrics-server
	
	I1013 23:19:03.638132  635465 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:19:03.640502  635465 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
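The healthz wait above polls https://192.168.76.2:8443/healthz until the apiserver answers 200 with body "ok", trusting the cluster CA that was copied to the node earlier in the log. A standalone probe in Go (a sketch; paths and address taken from the log):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA, as scp'd earlier in the log
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		panic("bad CA PEM")
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }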
	I1013 23:19:03.641600  635465 api_server.go:141] control plane version: v1.34.1
	I1013 23:19:03.641730  635465 addons.go:514] duration metric: took 8.411886745s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:19:03.641788  635465 api_server.go:131] duration metric: took 10.291808ms to wait for apiserver health ...
	I1013 23:19:03.641811  635465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:19:03.646907  635465 system_pods.go:59] 8 kube-system pods found
	I1013 23:19:03.646949  635465 system_pods.go:61] "coredns-66bc5c9577-xj6dp" [f8aa8176-0559-438a-bb73-df95a9b5b826] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:19:03.646958  635465 system_pods.go:61] "etcd-newest-cni-041709" [2e1039ec-5511-4bc4-bb4e-331058716785] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:03.646964  635465 system_pods.go:61] "kindnet-x8mhj" [414b54bb-0026-41ac-96be-8dee1342b4eb] Running
	I1013 23:19:03.646972  635465 system_pods.go:61] "kube-apiserver-newest-cni-041709" [e9b71f4d-dcbb-41d1-a857-431101cc96c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:03.646978  635465 system_pods.go:61] "kube-controller-manager-newest-cni-041709" [1ccd495b-3870-48ce-8bc7-bc4fb413007f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:03.646984  635465 system_pods.go:61] "kube-proxy-9th9t" [36d1d7c2-c48c-4aeb-a4bc-86598239d36d] Running
	I1013 23:19:03.646991  635465 system_pods.go:61] "kube-scheduler-newest-cni-041709" [d633d6be-b266-423e-b273-f756f05c08ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:03.647004  635465 system_pods.go:61] "storage-provisioner" [641ababe-c476-4464-889e-314716244888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:19:03.647013  635465 system_pods.go:74] duration metric: took 5.181501ms to wait for pod list to return data ...
	I1013 23:19:03.647024  635465 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:19:03.649942  635465 default_sa.go:45] found service account: "default"
	I1013 23:19:03.649969  635465 default_sa.go:55] duration metric: took 2.938961ms for default service account to be created ...
	I1013 23:19:03.649982  635465 kubeadm.go:586] duration metric: took 8.420609362s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:19:03.650001  635465 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:19:03.654997  635465 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:19:03.655029  635465 node_conditions.go:123] node cpu capacity is 2
	I1013 23:19:03.655042  635465 node_conditions.go:105] duration metric: took 5.03592ms to run NodePressure ...
	I1013 23:19:03.655108  635465 start.go:241] waiting for startup goroutines ...
	I1013 23:19:03.655123  635465 start.go:246] waiting for cluster config update ...
	I1013 23:19:03.655136  635465 start.go:255] writing updated cluster config ...
	I1013 23:19:03.655451  635465 ssh_runner.go:195] Run: rm -f paused
	I1013 23:19:03.727707  635465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:19:03.731053  635465 out.go:179] * Done! kubectl is now configured to use "newest-cni-041709" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.813935668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.822447228Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dcffea3f-4c52-44a5-99b5-3d13634f8c6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.836779964Z" level=info msg="Running pod sandbox: kube-system/kindnet-x8mhj/POD" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.837050802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.849007946Z" level=info msg="Ran pod sandbox 3a81d113009025c5919d22a5d2093079f8b3cb9f4ea92965f2b0851d7098f57e with infra container: kube-system/kube-proxy-9th9t/POD" id=dcffea3f-4c52-44a5-99b5-3d13634f8c6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.850687552Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=463cb632-6403-496f-8824-2deef561fd49 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.85259185Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.882850243Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b94365aa-521a-49c4-9803-5e24d1a64af3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.885621789Z" level=info msg="Creating container: kube-system/kube-proxy-9th9t/kube-proxy" id=2610761b-eae7-4064-a2c2-2ac2a8947f9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.891790216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.902651145Z" level=info msg="Ran pod sandbox 7b6160db668c60307004203cc4f0cf5cdcb3660960af0a611e5397ceb352a325 with infra container: kube-system/kindnet-x8mhj/POD" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.907212695Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=12d334c0-a69d-4c6e-bc1d-35c0b51dacad name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.947989532Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6218e8eb-f431-4159-8e91-96e38a7d892d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.956099492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.972937106Z" level=info msg="Creating container: kube-system/kindnet-x8mhj/kindnet-cni" id=f95351db-10a9-4fed-a311-cfc6b51fd479 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.973477698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.973862527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.999164727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.037584393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.16722991Z" level=info msg="Created container 62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b: kube-system/kube-proxy-9th9t/kube-proxy" id=2610761b-eae7-4064-a2c2-2ac2a8947f9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.17264874Z" level=info msg="Starting container: 62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b" id=bd95c722-6fad-4f95-a3d2-76343cc250f9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.173860433Z" level=info msg="Created container 13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f: kube-system/kindnet-x8mhj/kindnet-cni" id=f95351db-10a9-4fed-a311-cfc6b51fd479 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.175588554Z" level=info msg="Starting container: 13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f" id=b3f3ac56-5e49-4f94-b107-271068eddbce name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.18294026Z" level=info msg="Started container" PID=1064 containerID=62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b description=kube-system/kube-proxy-9th9t/kube-proxy id=bd95c722-6fad-4f95-a3d2-76343cc250f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a81d113009025c5919d22a5d2093079f8b3cb9f4ea92965f2b0851d7098f57e
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.197811472Z" level=info msg="Started container" PID=1068 containerID=13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f description=kube-system/kindnet-x8mhj/kindnet-cni id=b3f3ac56-5e49-4f94-b107-271068eddbce name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b6160db668c60307004203cc4f0cf5cdcb3660960af0a611e5397ceb352a325
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	13e7044da7f36       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   7b6160db668c6       kindnet-x8mhj                               kube-system
	62e78df64abfe       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   3a81d11300902       kube-proxy-9th9t                            kube-system
	ff97608ecad86       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   724001f409807       kube-scheduler-newest-cni-041709            kube-system
	f585eee05e276       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   5161581e06395       kube-controller-manager-newest-cni-041709   kube-system
	44d53fd79812b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   ba988e755653f       etcd-newest-cni-041709                      kube-system
	d86542dd3227b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   dbe05ebb40886       kube-apiserver-newest-cni-041709            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-041709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-041709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-041709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:18:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-041709
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-041709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                0d9c1be1-5d17-406d-9cb7-8ce49d27cba4
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-041709                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-x8mhj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-041709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-041709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-9th9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-041709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s (x8 over 40s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 40s)  kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 40s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-041709 event: Registered Node newest-cni-041709 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-041709 event: Registered Node newest-cni-041709 in Controller
	
	
	==> dmesg <==
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	[ +26.588739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c] <==
	{"level":"warn","ts":"2025-10-13T23:18:58.707900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.823837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.848351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.909885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.949640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.982361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.019329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.073964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.102718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.163935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.185994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.211864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.237956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.290920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.335873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.376083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.405523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.464584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.554189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.639211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.654598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.690039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.722983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.747252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.836611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:19:07 up  3:01,  0 user,  load average: 4.77, 3.76, 2.92
	Linux newest-cni-041709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f] <==
	I1013 23:19:02.316164       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:19:02.316393       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:19:02.316506       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:19:02.316517       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:19:02.316531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:19:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:19:02.511189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:19:02.511231       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:19:02.511241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:19:02.511406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e] <==
	I1013 23:19:01.413693       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:19:01.414460       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:19:01.415316       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:19:01.415453       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 23:19:01.415500       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:19:01.432889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:19:01.434708       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:19:01.434726       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:19:01.434733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:19:01.434740       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:19:01.444792       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:19:01.468464       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:19:01.484916       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:19:01.627922       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1013 23:19:01.734328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:19:01.862435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:19:02.814301       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:19:03.005151       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:19:03.173295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:19:03.242199       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:19:03.429198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.32.82"}
	I1013 23:19:03.459311       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.9.247"}
	I1013 23:19:05.723034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:19:06.119318       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:19:06.168115       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f585eee05e276e63d2044fb2ed0672a9197d1aaaacfa329137bbffb7a6fe644d] <==
	I1013 23:19:05.707804       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:19:05.710209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:05.710294       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:19:05.710347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 23:19:05.712192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:19:05.712327       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 23:19:05.712386       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:19:05.713479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:19:05.713869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:19:05.715021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 23:19:05.715126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 23:19:05.715187       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:19:05.715221       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:19:05.715252       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:19:05.715259       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:19:05.716712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 23:19:05.716801       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:19:05.720218       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:19:05.720784       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:19:05.724009       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:19:05.724162       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:19:05.724615       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-041709"
	I1013 23:19:05.724704       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:19:05.727501       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:19:05.729646       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b] <==
	I1013 23:19:02.714067       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:19:02.856986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:19:02.962495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:19:02.962541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:19:02.962629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:19:03.423740       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:19:03.427332       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:19:03.476039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:19:03.476370       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:19:03.476394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:03.481746       1 config.go:200] "Starting service config controller"
	I1013 23:19:03.481772       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:19:03.481789       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:19:03.481793       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:19:03.481807       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:19:03.481811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:19:03.486425       1 config.go:309] "Starting node config controller"
	I1013 23:19:03.486512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:19:03.486544       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:19:03.582267       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:19:03.582303       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:19:03.582345       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ff97608ecad8678ef83e1cca9d995096860465b562e07d958b0f6db3f4e80297] <==
	I1013 23:18:56.516340       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:19:01.254842       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:19:01.254870       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:19:01.254880       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:19:01.254887       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:19:01.570084       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:19:01.570115       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:01.580910       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:01.580939       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:01.583909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:19:01.584237       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:19:01.681455       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:18:57 newest-cni-041709 kubelet[731]: E1013 23:18:57.979936     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-041709\" not found" node="newest-cni-041709"
	Oct 13 23:18:58 newest-cni-041709 kubelet[731]: E1013 23:18:58.629734     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-041709\" not found" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.073506     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.440565     731 apiserver.go:52] "Watching apiserver"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.573652     731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574385     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-lib-modules\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574424     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-xtables-lock\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574477     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-cni-cfg\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574495     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-xtables-lock\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574547     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-lib-modules\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622607     731 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622706     731 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622737     731 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.630071     731 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.643492     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-041709\" already exists" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.643539     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.738930     731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.797893     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-041709\" already exists" pod="kube-system/kube-apiserver-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.797930     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.855569     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-041709\" already exists" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.855613     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.898498     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-041709\" already exists" pod="kube-system/kube-scheduler-newest-cni-041709"
	Oct 13 23:19:04 newest-cni-041709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:19:05 newest-cni-041709 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:19:05 newest-cni-041709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
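The Unschedulable entries in the system_pods output above (coredns, storage-provisioner) all trace back to the two node.kubernetes.io/not-ready taints visible under "describe nodes". A minimal sketch for surfacing those taints directly, assuming kubectl is on PATH and the newest-cni-041709 context from this run exists (illustrative only, not part of the test suite):

	// taints.go: print each node's taints as the scheduler sees them.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// jsonpath emits "<node>\t<taints>" per node; while the CNI is still
		// coming up, the not-ready taints from the log above appear here.
		out, err := exec.Command("kubectl", "--context", "newest-cni-041709",
			"get", "nodes", "-o",
			`jsonpath={range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}
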
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-041709 -n newest-cni-041709: exit status 2 (353.047924ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
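The probe above relies on minikube status encoding degraded states in its exit code while still printing the field selected by the Go template (here {{.APIServer}}), which is why exit status 2 with "Running" on stdout is tolerated. A minimal sketch of that pattern, assuming the binary path and profile name from this run (an illustrative re-implementation, not the suite's helper):

	// status.go: run a templated minikube status probe, treating exit
	// status 2 as informational rather than fatal.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-041709")
		out, err := cmd.Output() // stdout is returned even on non-zero exit
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// minikube uses non-zero codes for degraded/stopped states, so
			// this mirrors the "may be ok" handling in the log above.
			fmt.Printf("exit status %d (may be ok): %s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("apiserver: %s", out)
	}
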
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-041709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp: exit status 1 (85.255887ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xj6dp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tx8z2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmxsp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp: exit status 1
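The NotFound errors above indicate the pods matched by --field-selector=status.phase!=Running were deleted between the list and the describe. A minimal sketch that runs the same sweep while tolerating that race, assuming kubectl and the same context (illustrative only; the field selector is the one used above, the rest is not the suite's code):

	// nonrunning.go: list non-running pods namespace-aware, then describe
	// each, treating a vanished pod as a report rather than a failure.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-041709" // context name taken from this run
		list, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"--field-selector=status.phase!=Running", "-o",
			`jsonpath={range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}`,
		).Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, line := range strings.Split(strings.TrimSpace(string(list)), "\n") {
			f := strings.Fields(line)
			if len(f) != 2 {
				continue
			}
			// Pods can vanish between list and describe, as the NotFound
			// errors above show, so a miss here is logged and skipped.
			out, err := exec.Command("kubectl", "--context", ctx, "-n", f[0],
				"describe", "pod", f[1]).CombinedOutput()
			if err != nil {
				fmt.Printf("describe %s/%s: %v\n", f[0], f[1], err)
				continue
			}
			fmt.Print(string(out))
		}
	}
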
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-041709
helpers_test.go:243: (dbg) docker inspect newest-cni-041709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	        "Created": "2025-10-13T23:18:08.094436918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 635599,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:18:46.994437684Z",
	            "FinishedAt": "2025-10-13T23:18:45.926059869Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/hosts",
	        "LogPath": "/var/lib/docker/containers/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd/06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd-json.log",
	        "Name": "/newest-cni-041709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-041709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-041709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06492791cd8f48ff33261ff73fda9af7dc2d3ccf1b9bd275d582d532b49036fd",
	                "LowerDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d2ee3f7d04149a6c96b485ff06e13a8222492de8e7b6885f2a1bc52e9af5fb7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-041709",
	                "Source": "/var/lib/docker/volumes/newest-cni-041709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-041709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-041709",
	                "name.minikube.sigs.k8s.io": "newest-cni-041709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7c33feec07c4d3c86a89a089af3c910301800ce374427b728858f27ad99b92b",
	            "SandboxKey": "/var/run/docker/netns/f7c33feec07c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-041709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ab:f2:b0:49:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3df7c953cf9f4e0e97cdf9e47b4f15792247e0d1f7edb011f023caaa15ec476f",
	                    "EndpointID": "68e6ca1ea11907448a1fbdc141d752ccb52016ca76d93deee98944a86641f5cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-041709",
	                        "06492791cd8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
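Single fields can be pulled out of that inspect data with docker inspect's built-in -f Go template instead of parsing the full JSON. A minimal sketch, assuming the container name from this run, that extracts just the published host port for 8443/tcp (shown as "33487" in the output above):

	// inspectport.go: read one field from docker inspect via a template.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The template indexes NetworkSettings.Ports["8443/tcp"][0].HostPort.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"newest-cni-041709").CombinedOutput()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("apiserver published on 127.0.0.1:%s", out)
	}
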
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709: exit status 2 (339.57086ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-041709 logs -n 25: (1.042052799s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-505482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │                     │
	│ stop    │ -p embed-certs-505482 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:16 UTC │
	│ start   │ -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:16 UTC │ 13 Oct 25 23:17 UTC │
	│ image   │ no-preload-985461 image list --format=json                                                                                                                                                                                                    │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p no-preload-985461 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p newest-cni-041709 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-033746 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	│ image   │ newest-cni-041709 image list --format=json                                                                                                                                                                                                    │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ pause   │ -p newest-cni-041709 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:18:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:18:46.659625  635465 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:18:46.659759  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.659771  635465 out.go:374] Setting ErrFile to fd 2...
	I1013 23:18:46.659778  635465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:18:46.660033  635465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:18:46.660394  635465 out.go:368] Setting JSON to false
	I1013 23:18:46.661326  635465 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10863,"bootTime":1760386664,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:18:46.661395  635465 start.go:141] virtualization:  
	I1013 23:18:46.665006  635465 out.go:179] * [newest-cni-041709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:18:46.668932  635465 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:18:46.668996  635465 notify.go:220] Checking for updates...
	I1013 23:18:46.674873  635465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:18:46.677956  635465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:46.680967  635465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:18:46.684029  635465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:18:46.686838  635465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:18:46.690305  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:46.690950  635465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:18:46.730001  635465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:18:46.730151  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.812994  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.793285508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.813104  635465 docker.go:318] overlay module found
	I1013 23:18:46.817039  635465 out.go:179] * Using the docker driver based on existing profile
	I1013 23:18:46.820013  635465 start.go:305] selected driver: docker
	I1013 23:18:46.820036  635465 start.go:925] validating driver "docker" against &{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.820133  635465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:18:46.820838  635465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:18:46.888064  635465 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:18:46.878929832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:18:46.888406  635465 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:18:46.888461  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:46.888520  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:46.888562  635465 start.go:349] cluster config:
	{Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:46.892201  635465 out.go:179] * Starting "newest-cni-041709" primary control-plane node in "newest-cni-041709" cluster
	I1013 23:18:46.895353  635465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:18:46.898524  635465 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:18:46.901626  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:46.901707  635465 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:18:46.901713  635465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:18:46.901723  635465 cache.go:58] Caching tarball of preloaded images
	I1013 23:18:46.901838  635465 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:18:46.901850  635465 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:18:46.901994  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:46.932671  635465 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:18:46.932697  635465 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:18:46.932710  635465 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:18:46.932738  635465 start.go:360] acquireMachinesLock for newest-cni-041709: {Name:mk550fb39e8064c08d6ccaf342c21fc53a30808d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:18:46.932799  635465 start.go:364] duration metric: took 35.913µs to acquireMachinesLock for "newest-cni-041709"
	I1013 23:18:46.932823  635465 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:18:46.932842  635465 fix.go:54] fixHost starting: 
	I1013 23:18:46.933108  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:46.956430  635465 fix.go:112] recreateIfNeeded on newest-cni-041709: state=Stopped err=<nil>
	W1013 23:18:46.956463  635465 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 23:18:43.907614  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	W1013 23:18:46.408020  628422 node_ready.go:57] node "default-k8s-diff-port-033746" has "Ready":"False" status (will retry)
	I1013 23:18:46.908290  628422 node_ready.go:49] node "default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:46.908317  628422 node_ready.go:38] duration metric: took 40.004370462s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:18:46.908330  628422 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:46.908478  628422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:46.923250  628422 api_server.go:72] duration metric: took 41.358831182s to wait for apiserver process to appear ...
	I1013 23:18:46.923281  628422 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:18:46.923305  628422 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1013 23:18:46.935604  628422 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1013 23:18:46.936939  628422 api_server.go:141] control plane version: v1.34.1
	I1013 23:18:46.936961  628422 api_server.go:131] duration metric: took 13.673442ms to wait for apiserver health ...
	I1013 23:18:46.936970  628422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:18:46.941226  628422 system_pods.go:59] 8 kube-system pods found
	I1013 23:18:46.941259  628422 system_pods.go:61] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.941266  628422 system_pods.go:61] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.941273  628422 system_pods.go:61] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.941278  628422 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.941283  628422 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.941287  628422 system_pods.go:61] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.941292  628422 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.941297  628422 system_pods.go:61] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.941305  628422 system_pods.go:74] duration metric: took 4.329029ms to wait for pod list to return data ...
	I1013 23:18:46.941312  628422 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:18:46.944488  628422 default_sa.go:45] found service account: "default"
	I1013 23:18:46.944516  628422 default_sa.go:55] duration metric: took 3.197368ms for default service account to be created ...
	I1013 23:18:46.944526  628422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:18:46.950031  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:46.950073  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:46.950081  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:46.950087  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:46.950092  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:46.950097  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:46.950101  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:46.950106  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:46.950112  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:46.950139  628422 retry.go:31] will retry after 198.826906ms: missing components: kube-dns
	I1013 23:18:47.153958  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.153990  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.153998  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.154004  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.154008  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.154012  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.154017  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.154020  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.154026  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.154041  628422 retry.go:31] will retry after 287.091453ms: missing components: kube-dns
	I1013 23:18:47.445492  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.445522  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:18:47.445530  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.445537  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.445542  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.445546  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.445550  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.445554  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.445560  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:18:47.445574  628422 retry.go:31] will retry after 372.489262ms: missing components: kube-dns
	I1013 23:18:47.822900  628422 system_pods.go:86] 8 kube-system pods found
	I1013 23:18:47.822989  628422 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running
	I1013 23:18:47.823015  628422 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running
	I1013 23:18:47.823037  628422 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:18:47.823064  628422 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running
	I1013 23:18:47.823169  628422 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running
	I1013 23:18:47.823209  628422 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:18:47.823238  628422 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running
	I1013 23:18:47.823260  628422 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:18:47.823291  628422 system_pods.go:126] duration metric: took 878.758193ms to wait for k8s-apps to be running ...
	I1013 23:18:47.823314  628422 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:18:47.823387  628422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:18:47.840918  628422 system_svc.go:56] duration metric: took 17.596072ms WaitForService to wait for kubelet
	I1013 23:18:47.840951  628422 kubeadm.go:586] duration metric: took 42.276544463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:18:47.840971  628422 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:18:47.845175  628422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:18:47.845214  628422 node_conditions.go:123] node cpu capacity is 2
	I1013 23:18:47.845227  628422 node_conditions.go:105] duration metric: took 4.251164ms to run NodePressure ...
	I1013 23:18:47.845240  628422 start.go:241] waiting for startup goroutines ...
	I1013 23:18:47.845248  628422 start.go:246] waiting for cluster config update ...
	I1013 23:18:47.845259  628422 start.go:255] writing updated cluster config ...
	I1013 23:18:47.845569  628422 ssh_runner.go:195] Run: rm -f paused
	I1013 23:18:47.851978  628422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:47.855576  628422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.860797  628422 pod_ready.go:94] pod "coredns-66bc5c9577-qf4lq" is "Ready"
	I1013 23:18:47.860825  628422 pod_ready.go:86] duration metric: took 5.221377ms for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.863794  628422 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.869540  628422 pod_ready.go:94] pod "etcd-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.869581  628422 pod_ready.go:86] duration metric: took 5.758982ms for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.872429  628422 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.877409  628422 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:47.877436  628422 pod_ready.go:86] duration metric: took 4.943697ms for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:47.880160  628422 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.256569  628422 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:48.256600  628422 pod_ready.go:86] duration metric: took 376.414834ms for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.455997  628422 pod_ready.go:83] waiting for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:48.857167  628422 pod_ready.go:94] pod "kube-proxy-mxnv7" is "Ready"
	I1013 23:18:48.857199  628422 pod_ready.go:86] duration metric: took 401.173799ms for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.056104  628422 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456155  628422 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-033746" is "Ready"
	I1013 23:18:49.456184  628422 pod_ready.go:86] duration metric: took 400.055996ms for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:18:49.456198  628422 pod_ready.go:40] duration metric: took 1.604180795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:18:49.517347  628422 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:18:49.520660  628422 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-033746" cluster and "default" namespace by default
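	The pod_ready waits above correspond to ordinary kubectl checks and can be reproduced by hand. A sketch, assuming minikube named the kubectl context after the profile as it normally does:

	kubectl --context default-k8s-diff-port-033746 get pods -n kube-system
	kubectl --context default-k8s-diff-port-033746 wait pod -n kube-system -l k8s-app=kube-dns --for=condition=Ready --timeout=4m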
	I1013 23:18:46.959948  635465 out.go:252] * Restarting existing docker container for "newest-cni-041709" ...
	I1013 23:18:46.960041  635465 cli_runner.go:164] Run: docker start newest-cni-041709
	I1013 23:18:47.272614  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:47.301351  635465 kic.go:430] container "newest-cni-041709" state is running.
	I1013 23:18:47.302387  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:47.336818  635465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/config.json ...
	I1013 23:18:47.337080  635465 machine.go:93] provisionDockerMachine start ...
	I1013 23:18:47.337156  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:47.361341  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:47.361673  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:47.361689  635465 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:18:47.362232  635465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54710->127.0.0.1:33484: read: connection reset by peer
	I1013 23:18:50.506943  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.506971  635465 ubuntu.go:182] provisioning hostname "newest-cni-041709"
	I1013 23:18:50.507038  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.529919  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.530245  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.530263  635465 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-041709 && echo "newest-cni-041709" | sudo tee /etc/hostname
	I1013 23:18:50.684921  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-041709
	
	I1013 23:18:50.685001  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:50.704082  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:50.704403  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:50.704447  635465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-041709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-041709/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-041709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:18:50.859361  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
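	The hostname script above rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one, so the node resolves its own name without DNS. It can be spot-checked from the host through minikube's ssh passthrough (a sketch; the quoted command runs inside the node):

	out/minikube-linux-arm64 ssh -p newest-cni-041709 "grep 127.0.1.1 /etc/hosts"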
	I1013 23:18:50.859386  635465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:18:50.859412  635465 ubuntu.go:190] setting up certificates
	I1013 23:18:50.859421  635465 provision.go:84] configureAuth start
	I1013 23:18:50.859479  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:50.876142  635465 provision.go:143] copyHostCerts
	I1013 23:18:50.876213  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:18:50.876241  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:18:50.876320  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:18:50.876465  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:18:50.876477  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:18:50.876508  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:18:50.876577  635465 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:18:50.876587  635465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:18:50.876613  635465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:18:50.876675  635465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.newest-cni-041709 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-041709]
	I1013 23:18:51.531424  635465 provision.go:177] copyRemoteCerts
	I1013 23:18:51.531540  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:18:51.531626  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.550654  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:51.658836  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:18:51.677867  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 23:18:51.695565  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 23:18:51.718317  635465 provision.go:87] duration metric: took 858.871562ms to configureAuth
	I1013 23:18:51.718407  635465 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:18:51.718639  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:51.718821  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:51.739263  635465 main.go:141] libmachine: Using SSH client type: native
	I1013 23:18:51.739570  635465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33484 <nil> <nil>}
	I1013 23:18:51.739597  635465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:18:52.103542  635465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:18:52.103629  635465 machine.go:96] duration metric: took 4.766529819s to provisionDockerMachine
	I1013 23:18:52.103655  635465 start.go:293] postStartSetup for "newest-cni-041709" (driver="docker")
	I1013 23:18:52.103680  635465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:18:52.103769  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:18:52.103834  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.134085  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.243909  635465 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:18:52.248277  635465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:18:52.248316  635465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:18:52.248328  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:18:52.248403  635465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:18:52.248560  635465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:18:52.248714  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:18:52.265882  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:52.295454  635465 start.go:296] duration metric: took 191.769349ms for postStartSetup
	I1013 23:18:52.295552  635465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:18:52.295635  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.314699  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.416219  635465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:18:52.421219  635465 fix.go:56] duration metric: took 5.488375753s for fixHost
	I1013 23:18:52.421253  635465 start.go:83] releasing machines lock for "newest-cni-041709", held for 5.488442081s
	I1013 23:18:52.421386  635465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-041709
	I1013 23:18:52.447747  635465 ssh_runner.go:195] Run: cat /version.json
	I1013 23:18:52.447805  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.447830  635465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:18:52.447892  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:52.472897  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.484151  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:52.583128  635465 ssh_runner.go:195] Run: systemctl --version
	I1013 23:18:52.675301  635465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:18:52.730783  635465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:18:52.736216  635465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:18:52.736295  635465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:18:52.744350  635465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:18:52.744460  635465 start.go:495] detecting cgroup driver to use...
	I1013 23:18:52.744522  635465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:18:52.744593  635465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:18:52.760915  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:18:52.775406  635465 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:18:52.775473  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:18:52.791809  635465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:18:52.805702  635465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:18:52.932808  635465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:18:53.078511  635465 docker.go:234] disabling docker service ...
	I1013 23:18:53.078575  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:18:53.096051  635465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:18:53.111550  635465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:18:53.239198  635465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:18:53.365077  635465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:18:53.379354  635465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:18:53.393225  635465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:18:53.393321  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.402665  635465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:18:53.402754  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.412000  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.421032  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.430178  635465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:18:53.446415  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.461458  635465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:18:53.471380  635465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
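	Taken together, the sed edits above leave the CRI-O drop-in roughly in the following shape. This is a reconstruction from the commands, not a dump of the actual file, and the section headers are assumed from a stock CRI-O config:

	# /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]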
	I1013 23:18:53.480834  635465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:18:53.488886  635465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:18:53.496978  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:53.632607  635465 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:18:53.798233  635465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:18:53.798351  635465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:18:53.802403  635465 start.go:563] Will wait 60s for crictl version
	I1013 23:18:53.802545  635465 ssh_runner.go:195] Run: which crictl
	I1013 23:18:53.806255  635465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:18:53.831293  635465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:18:53.831455  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.861533  635465 ssh_runner.go:195] Run: crio --version
	I1013 23:18:53.892208  635465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:18:53.894956  635465 cli_runner.go:164] Run: docker network inspect newest-cni-041709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:18:53.910736  635465 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:18:53.914876  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:18:53.929173  635465 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 23:18:53.931924  635465 kubeadm.go:883] updating cluster {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:18:53.932080  635465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:18:53.932170  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.968015  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.968040  635465 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:18:53.968096  635465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:18:53.995316  635465 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:18:53.995344  635465 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:18:53.995353  635465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:18:53.995455  635465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-041709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
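	The unit fragment above is a systemd drop-in, not a full service file: the empty ExecStart= clears the base unit's command line before the real one is set, and the file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 367-byte scp below. Two ways to confirm the merge after the daemon-reload (a sketch, run on the node):
	
	    sudo systemctl cat kubelet             # base kubelet.service plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart    # effective command line after the ExecStart= reset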
	I1013 23:18:53.995543  635465 ssh_runner.go:195] Run: crio config
	I1013 23:18:54.083016  635465 cni.go:84] Creating CNI manager for ""
	I1013 23:18:54.083173  635465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:18:54.083216  635465 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 23:18:54.083275  635465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-041709 NodeName:newest-cni-041709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:18:54.083418  635465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-041709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:18:54.083495  635465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:18:54.092035  635465 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:18:54.092108  635465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:18:54.100446  635465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 23:18:54.113935  635465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:18:54.128703  635465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
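	With kubeadm.yaml.new staged on the node, the generated config could be sanity-checked before kubeadm consumes it. A hypothetical check, assuming kubeadm sits beside kubelet under the binaries directory seen above (minikube itself does not run this step here):
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new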
	I1013 23:18:54.145028  635465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:18:54.149031  635465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:18:54.159321  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:54.292995  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:54.310744  635465 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709 for IP: 192.168.76.2
	I1013 23:18:54.310802  635465 certs.go:195] generating shared ca certs ...
	I1013 23:18:54.310842  635465 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:54.311021  635465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:18:54.311158  635465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:18:54.311211  635465 certs.go:257] generating profile certs ...
	I1013 23:18:54.311334  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/client.key
	I1013 23:18:54.311450  635465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key.01857a96
	I1013 23:18:54.311534  635465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key
	I1013 23:18:54.311673  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:18:54.311741  635465 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:18:54.311778  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:18:54.311831  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:18:54.311886  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:18:54.311951  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:18:54.312039  635465 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:18:54.312871  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:18:54.332293  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:18:54.350249  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:18:54.367756  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:18:54.385471  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 23:18:54.403134  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 23:18:54.427326  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:18:54.452916  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/newest-cni-041709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:18:54.475220  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:18:54.503057  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:18:54.528203  635465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:18:54.549801  635465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:18:54.563631  635465 ssh_runner.go:195] Run: openssl version
	I1013 23:18:54.570495  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:18:54.579237  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583560  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.583693  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:18:54.629672  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:18:54.637997  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:18:54.646667  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651345  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.651410  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:18:54.694293  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:18:54.703284  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:18:54.712284  635465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717383  635465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.717493  635465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:18:54.759219  635465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
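	The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above follow OpenSSL's c_rehash convention: each is the certificate's subject-name hash with a .0 suffix, which is how OpenSSL locates CAs in /etc/ssl/certs at verification time. Deriving one of the links by hand:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"                                            # b5213941, matching the link above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"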
	I1013 23:18:54.767198  635465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:18:54.771070  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:18:54.812150  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:18:54.853770  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:18:54.895147  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:18:54.939997  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:18:54.993659  635465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
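	Each of the six openssl runs above uses -checkend: exit 0 if the certificate is still valid N seconds from now, non-zero otherwise, with 86400 s being 24 h. The standalone form of the same test:
	
	    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	        echo "cert valid for at least another 24h"
	    else
	        echo "cert expires within 24h"   # a failure here would presumably trigger regeneration
	    fi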
	I1013 23:18:55.053286  635465 kubeadm.go:400] StartCluster: {Name:newest-cni-041709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-041709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:18:55.053441  635465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:18:55.053552  635465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:18:55.139374  635465 cri.go:89] found id: "44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c"
	I1013 23:18:55.139447  635465 cri.go:89] found id: "d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e"
	I1013 23:18:55.139465  635465 cri.go:89] found id: ""
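	--quiet returns bare container IDs; the two found here are the restarted etcd and kube-apiserver (compare the container-status table further down). Dropping --quiet shows names and states as well:
	
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system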
	I1013 23:18:55.139546  635465 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:18:55.167760  635465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:18:55Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:18:55.167914  635465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:18:55.188111  635465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:18:55.188180  635465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:18:55.188260  635465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:18:55.208830  635465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:18:55.209520  635465 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-041709" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.209858  635465 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-041709" cluster setting kubeconfig missing "newest-cni-041709" context setting]
	I1013 23:18:55.210381  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
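	After the repair, the profile should resolve to matching cluster, user, and context entries in the shared kubeconfig. A quick check against the path from the log:
	
	    kubectl config get-contexts \
	        --kubeconfig /home/jenkins/minikube-integration/21724-428797/kubeconfig
	    # expect a newest-cni-041709 context pointing at https://192.168.76.2:8443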
	I1013 23:18:55.212378  635465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:18:55.227701  635465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 23:18:55.227783  635465 kubeadm.go:601] duration metric: took 39.582112ms to restartPrimaryControlPlane
	I1013 23:18:55.227808  635465 kubeadm.go:402] duration metric: took 174.531778ms to StartCluster
	I1013 23:18:55.227847  635465 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.227942  635465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:18:55.229017  635465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:18:55.229307  635465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:18:55.229824  635465 config.go:182] Loaded profile config "newest-cni-041709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:18:55.229834  635465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:18:55.229919  635465 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-041709"
	I1013 23:18:55.229937  635465 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-041709"
	W1013 23:18:55.229950  635465 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:18:55.229972  635465 addons.go:69] Setting dashboard=true in profile "newest-cni-041709"
	I1013 23:18:55.230049  635465 addons.go:238] Setting addon dashboard=true in "newest-cni-041709"
	W1013 23:18:55.230070  635465 addons.go:247] addon dashboard should already be in state true
	I1013 23:18:55.230127  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.230742  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.229975  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.229981  635465 addons.go:69] Setting default-storageclass=true in profile "newest-cni-041709"
	I1013 23:18:55.231418  635465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-041709"
	I1013 23:18:55.231623  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.231707  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.237715  635465 out.go:179] * Verifying Kubernetes components...
	I1013 23:18:55.241204  635465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:18:55.292390  635465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:18:55.295919  635465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.295946  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:18:55.296012  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.296181  635465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:18:55.299151  635465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:18:55.302027  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:18:55.302060  635465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:18:55.302144  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.303396  635465 addons.go:238] Setting addon default-storageclass=true in "newest-cni-041709"
	W1013 23:18:55.303421  635465 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:18:55.303445  635465 host.go:66] Checking if "newest-cni-041709" exists ...
	I1013 23:18:55.303884  635465 cli_runner.go:164] Run: docker container inspect newest-cni-041709 --format={{.State.Status}}
	I1013 23:18:55.347631  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.358489  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.365461  635465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.365486  635465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:18:55.365565  635465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-041709
	I1013 23:18:55.395199  635465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/newest-cni-041709/id_rsa Username:docker}
	I1013 23:18:55.630044  635465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:18:55.648454  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:18:55.648519  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:18:55.651006  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:18:55.652631  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:18:55.684755  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:18:55.684821  635465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:18:55.707356  635465 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:18:55.707479  635465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:18:55.767725  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:18:55.767791  635465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:18:55.824694  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:18:55.824761  635465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:18:55.875821  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:18:55.875896  635465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:18:55.908545  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:18:55.908611  635465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:18:55.925007  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:18:55.925084  635465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:18:55.943567  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:18:55.943637  635465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:18:55.959202  635465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:18:55.959275  635465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:18:55.973822  635465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
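	All ten dashboard manifests go through a single kubectl invocation via repeated -f flags (it completes about 7.7 s later, below). A hedged follow-up check, assuming the kubernetes-dashboard namespace created by dashboard-ns.yaml:
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc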
	I1013 23:19:01.669714  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.018637686s)
	I1013 23:19:03.631413  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.978715869s)
	I1013 23:19:03.631471  635465 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.923948212s)
	I1013 23:19:03.631484  635465 api_server.go:72] duration metric: took 8.402112688s to wait for apiserver process to appear ...
	I1013 23:19:03.631490  635465 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:19:03.631506  635465 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:19:03.631861  635465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.657957301s)
	I1013 23:19:03.635115  635465 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-041709 addons enable metrics-server
	
	I1013 23:19:03.638132  635465 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:19:03.640502  635465 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
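	The same probe by hand (-k because the apiserver serves minikube's self-signed certificate; /healthz is readable without credentials under the default RBAC bindings):
	
	    curl -sk https://192.168.76.2:8443/healthz
	    # ok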
	I1013 23:19:03.641600  635465 api_server.go:141] control plane version: v1.34.1
	I1013 23:19:03.641730  635465 addons.go:514] duration metric: took 8.411886745s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:19:03.641788  635465 api_server.go:131] duration metric: took 10.291808ms to wait for apiserver health ...
	I1013 23:19:03.641811  635465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:19:03.646907  635465 system_pods.go:59] 8 kube-system pods found
	I1013 23:19:03.646949  635465 system_pods.go:61] "coredns-66bc5c9577-xj6dp" [f8aa8176-0559-438a-bb73-df95a9b5b826] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:19:03.646958  635465 system_pods.go:61] "etcd-newest-cni-041709" [2e1039ec-5511-4bc4-bb4e-331058716785] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:03.646964  635465 system_pods.go:61] "kindnet-x8mhj" [414b54bb-0026-41ac-96be-8dee1342b4eb] Running
	I1013 23:19:03.646972  635465 system_pods.go:61] "kube-apiserver-newest-cni-041709" [e9b71f4d-dcbb-41d1-a857-431101cc96c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:03.646978  635465 system_pods.go:61] "kube-controller-manager-newest-cni-041709" [1ccd495b-3870-48ce-8bc7-bc4fb413007f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:03.646984  635465 system_pods.go:61] "kube-proxy-9th9t" [36d1d7c2-c48c-4aeb-a4bc-86598239d36d] Running
	I1013 23:19:03.646991  635465 system_pods.go:61] "kube-scheduler-newest-cni-041709" [d633d6be-b266-423e-b273-f756f05c08ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:03.647004  635465 system_pods.go:61] "storage-provisioner" [641ababe-c476-4464-889e-314716244888] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 23:19:03.647013  635465 system_pods.go:74] duration metric: took 5.181501ms to wait for pod list to return data ...
	I1013 23:19:03.647024  635465 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:19:03.649942  635465 default_sa.go:45] found service account: "default"
	I1013 23:19:03.649969  635465 default_sa.go:55] duration metric: took 2.938961ms for default service account to be created ...
	I1013 23:19:03.649982  635465 kubeadm.go:586] duration metric: took 8.420609362s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 23:19:03.650001  635465 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:19:03.654997  635465 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:19:03.655029  635465 node_conditions.go:123] node cpu capacity is 2
	I1013 23:19:03.655042  635465 node_conditions.go:105] duration metric: took 5.03592ms to run NodePressure ...
	I1013 23:19:03.655108  635465 start.go:241] waiting for startup goroutines ...
	I1013 23:19:03.655123  635465 start.go:246] waiting for cluster config update ...
	I1013 23:19:03.655136  635465 start.go:255] writing updated cluster config ...
	I1013 23:19:03.655451  635465 ssh_runner.go:195] Run: rm -f paused
	I1013 23:19:03.727707  635465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:19:03.731053  635465 out.go:179] * Done! kubectl is now configured to use "newest-cni-041709" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.813935668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.822447228Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dcffea3f-4c52-44a5-99b5-3d13634f8c6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.836779964Z" level=info msg="Running pod sandbox: kube-system/kindnet-x8mhj/POD" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.837050802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.849007946Z" level=info msg="Ran pod sandbox 3a81d113009025c5919d22a5d2093079f8b3cb9f4ea92965f2b0851d7098f57e with infra container: kube-system/kube-proxy-9th9t/POD" id=dcffea3f-4c52-44a5-99b5-3d13634f8c6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.850687552Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=463cb632-6403-496f-8824-2deef561fd49 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.85259185Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.882850243Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b94365aa-521a-49c4-9803-5e24d1a64af3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.885621789Z" level=info msg="Creating container: kube-system/kube-proxy-9th9t/kube-proxy" id=2610761b-eae7-4064-a2c2-2ac2a8947f9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.891790216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.902651145Z" level=info msg="Ran pod sandbox 7b6160db668c60307004203cc4f0cf5cdcb3660960af0a611e5397ceb352a325 with infra container: kube-system/kindnet-x8mhj/POD" id=43d3c2fd-831b-480f-b5c7-60fabb342f26 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.907212695Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=12d334c0-a69d-4c6e-bc1d-35c0b51dacad name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.947989532Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6218e8eb-f431-4159-8e91-96e38a7d892d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.956099492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.972937106Z" level=info msg="Creating container: kube-system/kindnet-x8mhj/kindnet-cni" id=f95351db-10a9-4fed-a311-cfc6b51fd479 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.973477698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.973862527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:01 newest-cni-041709 crio[613]: time="2025-10-13T23:19:01.999164727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.037584393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.16722991Z" level=info msg="Created container 62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b: kube-system/kube-proxy-9th9t/kube-proxy" id=2610761b-eae7-4064-a2c2-2ac2a8947f9d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.17264874Z" level=info msg="Starting container: 62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b" id=bd95c722-6fad-4f95-a3d2-76343cc250f9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.173860433Z" level=info msg="Created container 13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f: kube-system/kindnet-x8mhj/kindnet-cni" id=f95351db-10a9-4fed-a311-cfc6b51fd479 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.175588554Z" level=info msg="Starting container: 13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f" id=b3f3ac56-5e49-4f94-b107-271068eddbce name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.18294026Z" level=info msg="Started container" PID=1064 containerID=62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b description=kube-system/kube-proxy-9th9t/kube-proxy id=bd95c722-6fad-4f95-a3d2-76343cc250f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a81d113009025c5919d22a5d2093079f8b3cb9f4ea92965f2b0851d7098f57e
	Oct 13 23:19:02 newest-cni-041709 crio[613]: time="2025-10-13T23:19:02.197811472Z" level=info msg="Started container" PID=1068 containerID=13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f description=kube-system/kindnet-x8mhj/kindnet-cni id=b3f3ac56-5e49-4f94-b107-271068eddbce name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b6160db668c60307004203cc4f0cf5cdcb3660960af0a611e5397ceb352a325
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	13e7044da7f36       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   7b6160db668c6       kindnet-x8mhj                               kube-system
	62e78df64abfe       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   3a81d11300902       kube-proxy-9th9t                            kube-system
	ff97608ecad86       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   724001f409807       kube-scheduler-newest-cni-041709            kube-system
	f585eee05e276       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   5161581e06395       kube-controller-manager-newest-cni-041709   kube-system
	44d53fd79812b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   ba988e755653f       etcd-newest-cni-041709                      kube-system
	d86542dd3227b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   dbe05ebb40886       kube-apiserver-newest-cni-041709            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-041709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-041709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=newest-cni-041709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:18:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-041709
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 23:19:01 +0000   Mon, 13 Oct 2025 23:18:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-041709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                0d9c1be1-5d17-406d-9cb7-8ce49d27cba4
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-041709                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-x8mhj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-041709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-041709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-9th9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-041709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  41s (x8 over 42s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 42s)  kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 42s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-041709 event: Registered Node newest-cni-041709 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-041709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-041709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-041709 event: Registered Node newest-cni-041709 in Controller
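	The two node.kubernetes.io/not-ready taints and the NotReady condition above are why coredns and storage-provisioner were reported Unschedulable at 23:19:03; they should clear once kindnet (started at 23:19:02) writes a CNI config into /etc/cni/net.d. One way to watch them drain, using the context written earlier:
	
	    kubectl --context newest-cni-041709 get node newest-cni-041709 \
	        -o jsonpath='{.spec.taints}{"\n"}'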
	
	
	==> dmesg <==
	[ +22.691175] overlayfs: idmapped layers are currently not supported
	[  +5.227604] overlayfs: idmapped layers are currently not supported
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	[ +26.588739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44d53fd79812b8d922ad72b9ac9d25226207caf5d09d5678a9138d64ac33674c] <==
	{"level":"warn","ts":"2025-10-13T23:18:58.707900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.823837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.848351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.909885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.949640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:58.982361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.019329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.073964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.102718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.163935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.185994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.211864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.237956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.290920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.335873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.376083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.405523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.464584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.554189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.639211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.654598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.690039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.722983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.747252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:18:59.836611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:19:09 up  3:01,  0 user,  load average: 4.55, 3.73, 2.92
	Linux newest-cni-041709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [13e7044da7f364b31d167202fbbdd375875538232ed14d416f92a00965983c0f] <==
	I1013 23:19:02.316164       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 23:19:02.316393       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 23:19:02.316506       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:19:02.316517       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:19:02.316531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:19:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:19:02.511189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:19:02.511231       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:19:02.511241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:19:02.511406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d86542dd3227b9d6eee466227d0a22afeced1086ec52a4e64f20f7da5d9ce81e] <==
	I1013 23:19:01.413693       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 23:19:01.414460       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 23:19:01.415316       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 23:19:01.415453       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 23:19:01.415500       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 23:19:01.432889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 23:19:01.434708       1 aggregator.go:171] initial CRD sync complete...
	I1013 23:19:01.434726       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 23:19:01.434733       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 23:19:01.434740       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:19:01.444792       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:19:01.468464       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:19:01.484916       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:19:01.627922       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1013 23:19:01.734328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:19:01.862435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:19:02.814301       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:19:03.005151       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:19:03.173295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:19:03.242199       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:19:03.429198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.32.82"}
	I1013 23:19:03.459311       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.9.247"}
	I1013 23:19:05.723034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:19:06.119318       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 23:19:06.168115       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [f585eee05e276e63d2044fb2ed0672a9197d1aaaacfa329137bbffb7a6fe644d] <==
	I1013 23:19:05.707804       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 23:19:05.710209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:05.710294       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:19:05.710347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 23:19:05.712192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:19:05.712327       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 23:19:05.712386       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 23:19:05.713479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:19:05.713869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:19:05.715021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 23:19:05.715126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 23:19:05.715187       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:19:05.715221       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 23:19:05.715252       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 23:19:05.715259       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:19:05.716712       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 23:19:05.716801       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:19:05.720218       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:19:05.720784       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:19:05.724009       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 23:19:05.724162       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 23:19:05.724615       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-041709"
	I1013 23:19:05.724704       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 23:19:05.727501       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 23:19:05.729646       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [62e78df64abfea9ae04654333b1d6f6bfa0cb7ef2dcafd84f8068abb6e49bf7b] <==
	I1013 23:19:02.714067       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:19:02.856986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:19:02.962495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:19:02.962541       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 23:19:02.962629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:19:03.423740       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:19:03.427332       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:19:03.476039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:19:03.476370       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:19:03.476394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:03.481746       1 config.go:200] "Starting service config controller"
	I1013 23:19:03.481772       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:19:03.481789       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:19:03.481793       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:19:03.481807       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:19:03.481811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:19:03.486425       1 config.go:309] "Starting node config controller"
	I1013 23:19:03.486512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:19:03.486544       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:19:03.582267       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:19:03.582303       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:19:03.582345       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ff97608ecad8678ef83e1cca9d995096860465b562e07d958b0f6db3f4e80297] <==
	I1013 23:18:56.516340       1 serving.go:386] Generated self-signed cert in-memory
	W1013 23:19:01.254842       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 23:19:01.254870       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 23:19:01.254880       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 23:19:01.254887       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 23:19:01.570084       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:19:01.570115       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:01.580910       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:01.580939       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:01.583909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:19:01.584237       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:19:01.681455       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:18:57 newest-cni-041709 kubelet[731]: E1013 23:18:57.979936     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-041709\" not found" node="newest-cni-041709"
	Oct 13 23:18:58 newest-cni-041709 kubelet[731]: E1013 23:18:58.629734     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-041709\" not found" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.073506     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.440565     731 apiserver.go:52] "Watching apiserver"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.573652     731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574385     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-lib-modules\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574424     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-xtables-lock\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574477     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-cni-cfg\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574495     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/414b54bb-0026-41ac-96be-8dee1342b4eb-xtables-lock\") pod \"kindnet-x8mhj\" (UID: \"414b54bb-0026-41ac-96be-8dee1342b4eb\") " pod="kube-system/kindnet-x8mhj"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.574547     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36d1d7c2-c48c-4aeb-a4bc-86598239d36d-lib-modules\") pod \"kube-proxy-9th9t\" (UID: \"36d1d7c2-c48c-4aeb-a4bc-86598239d36d\") " pod="kube-system/kube-proxy-9th9t"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622607     731 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622706     731 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.622737     731 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.630071     731 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.643492     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-041709\" already exists" pod="kube-system/etcd-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.643539     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.738930     731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.797893     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-041709\" already exists" pod="kube-system/kube-apiserver-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.797930     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.855569     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-041709\" already exists" pod="kube-system/kube-controller-manager-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: I1013 23:19:01.855613     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-041709"
	Oct 13 23:19:01 newest-cni-041709 kubelet[731]: E1013 23:19:01.898498     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-041709\" already exists" pod="kube-system/kube-scheduler-newest-cni-041709"
	Oct 13 23:19:04 newest-cni-041709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:19:05 newest-cni-041709 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:19:05 newest-cni-041709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
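Note: the tail of the kubelet log above shows systemd deactivating kubelet.service at 23:19:04-05, which matches the first step of `minikube pause` (the command runs `sudo systemctl disable --now kubelet` before freezing containers, as the stderr traces later in this report show), so the capture ends exactly where the pause attempt began. The recurring overlayfs "idmapped layers are currently not supported" lines in dmesg appear to be kernel-level noise on this 5.15 aws kernel rather than a failure signal. A quick node-side check (a sketch, assuming the node is still reachable over SSH; the profile name is taken from this run):

	out/minikube-linux-arm64 -p newest-cni-041709 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 -p newest-cni-041709 ssh -- sudo crictl ps --state running --quiet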
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-041709 -n newest-cni-041709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-041709 -n newest-cni-041709: exit status 2 (353.391565ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-041709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp: exit status 1 (88.679811ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xj6dp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-tx8z2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmxsp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-041709 describe pod coredns-66bc5c9577-xj6dp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.88s)
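Note: the NotFound errors in the post-mortem above are most likely a namespacing artifact rather than evidence the pods vanished: the describe command is issued without a namespace, so kubectl looks in default, while the listed pods live in kube-system and kubernetes-dashboard. A namespaced variant (a sketch using the pod names and context from this run):

	kubectl --context newest-cni-041709 -n kube-system describe pod coredns-66bc5c9577-xj6dp storage-provisioner
	kubectl --context newest-cni-041709 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-tx8z2 kubernetes-dashboard-855c9754f9-xmxsp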

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-033746 --alsologtostderr -v=1
E1013 23:20:46.228382  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:47.509701  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-033746 --alsologtostderr -v=1: exit status 80 (2.246741512s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-033746 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 23:20:45.844098  644395 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:20:45.844237  644395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:20:45.844249  644395 out.go:374] Setting ErrFile to fd 2...
	I1013 23:20:45.844263  644395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:20:45.844572  644395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:20:45.844864  644395 out.go:368] Setting JSON to false
	I1013 23:20:45.844892  644395 mustload.go:65] Loading cluster: default-k8s-diff-port-033746
	I1013 23:20:45.845274  644395 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:20:45.845734  644395 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:20:45.863038  644395 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:20:45.863443  644395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:20:45.936689  644395 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 23:20:45.926519333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:20:45.937401  644395 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-033746 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 23:20:45.940998  644395 out.go:179] * Pausing node default-k8s-diff-port-033746 ... 
	I1013 23:20:45.943828  644395 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:20:45.944193  644395 ssh_runner.go:195] Run: systemctl --version
	I1013 23:20:45.944255  644395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:20:45.961526  644395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:20:46.065736  644395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:46.080702  644395 pause.go:52] kubelet running: true
	I1013 23:20:46.080768  644395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:20:46.327224  644395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:20:46.327384  644395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:20:46.415142  644395 cri.go:89] found id: "5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35"
	I1013 23:20:46.415168  644395 cri.go:89] found id: "07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1"
	I1013 23:20:46.415183  644395 cri.go:89] found id: "2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	I1013 23:20:46.415188  644395 cri.go:89] found id: "f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215"
	I1013 23:20:46.415191  644395 cri.go:89] found id: "627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388"
	I1013 23:20:46.415195  644395 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:20:46.415198  644395 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:20:46.415202  644395 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:20:46.415206  644395 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:20:46.415224  644395 cri.go:89] found id: "224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a"
	I1013 23:20:46.415234  644395 cri.go:89] found id: "56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	I1013 23:20:46.415237  644395 cri.go:89] found id: ""
	I1013 23:20:46.415293  644395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:20:46.426590  644395 retry.go:31] will retry after 181.006039ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:20:46Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:20:46.608057  644395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:46.622721  644395 pause.go:52] kubelet running: false
	I1013 23:20:46.622787  644395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:20:46.798655  644395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:20:46.798732  644395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:20:46.873489  644395 cri.go:89] found id: "5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35"
	I1013 23:20:46.873514  644395 cri.go:89] found id: "07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1"
	I1013 23:20:46.873520  644395 cri.go:89] found id: "2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	I1013 23:20:46.873524  644395 cri.go:89] found id: "f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215"
	I1013 23:20:46.873527  644395 cri.go:89] found id: "627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388"
	I1013 23:20:46.873531  644395 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:20:46.873534  644395 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:20:46.873537  644395 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:20:46.873540  644395 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:20:46.873546  644395 cri.go:89] found id: "224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a"
	I1013 23:20:46.873550  644395 cri.go:89] found id: "56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	I1013 23:20:46.873553  644395 cri.go:89] found id: ""
	I1013 23:20:46.873608  644395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:20:46.885743  644395 retry.go:31] will retry after 283.233251ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:20:46Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:20:47.169198  644395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:47.183277  644395 pause.go:52] kubelet running: false
	I1013 23:20:47.183370  644395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:20:47.371829  644395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:20:47.371950  644395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:20:47.450210  644395 cri.go:89] found id: "5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35"
	I1013 23:20:47.450234  644395 cri.go:89] found id: "07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1"
	I1013 23:20:47.450239  644395 cri.go:89] found id: "2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	I1013 23:20:47.450244  644395 cri.go:89] found id: "f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215"
	I1013 23:20:47.450247  644395 cri.go:89] found id: "627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388"
	I1013 23:20:47.450251  644395 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:20:47.450254  644395 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:20:47.450257  644395 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:20:47.450293  644395 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:20:47.450307  644395 cri.go:89] found id: "224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a"
	I1013 23:20:47.450311  644395 cri.go:89] found id: "56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	I1013 23:20:47.450314  644395 cri.go:89] found id: ""
	I1013 23:20:47.450380  644395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:20:47.461421  644395 retry.go:31] will retry after 282.347067ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:20:47Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:20:47.744816  644395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:47.758760  644395 pause.go:52] kubelet running: false
	I1013 23:20:47.758858  644395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 23:20:47.932655  644395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 23:20:47.932734  644395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 23:20:48.002624  644395 cri.go:89] found id: "5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35"
	I1013 23:20:48.002692  644395 cri.go:89] found id: "07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1"
	I1013 23:20:48.002711  644395 cri.go:89] found id: "2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	I1013 23:20:48.002732  644395 cri.go:89] found id: "f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215"
	I1013 23:20:48.002751  644395 cri.go:89] found id: "627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388"
	I1013 23:20:48.002773  644395 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:20:48.002800  644395 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:20:48.002832  644395 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:20:48.002853  644395 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:20:48.002899  644395 cri.go:89] found id: "224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a"
	I1013 23:20:48.002917  644395 cri.go:89] found id: "56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	I1013 23:20:48.002934  644395 cri.go:89] found id: ""
	I1013 23:20:48.003009  644395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 23:20:48.020929  644395 out.go:203] 
	W1013 23:20:48.023853  644395 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 23:20:48.023880  644395 out.go:285] * 
	* 
	W1013 23:20:48.031241  644395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 23:20:48.034158  644395 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-033746 --alsologtostderr -v=1 failed: exit status 80
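Note: the stderr trace above shows a consistent pattern: stopping the kubelet succeeds (kubelet running flips from true to false after the first pass), and crictl can still enumerate the kube-system and kubernetes-dashboard containers, but every retry of `sudo runc list -f json` aborts because /run/runc, runc's default state root, does not exist on this CRI-O node. The two steps can be reproduced by hand (a sketch; whether CRI-O on this image keeps its runc state under a different root is an assumption to verify):

	out/minikube-linux-arm64 -p default-k8s-diff-port-033746 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p default-k8s-diff-port-033746 ssh -- sudo ls /run/runc
	out/minikube-linux-arm64 -p default-k8s-diff-port-033746 ssh -- sudo runc list -f json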
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-033746
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-033746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	        "Created": "2025-10-13T23:17:28.705422027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 639914,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:19:17.508813231Z",
	            "FinishedAt": "2025-10-13T23:19:14.635369501Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hostname",
	        "HostsPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hosts",
	        "LogPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090-json.log",
	        "Name": "/default-k8s-diff-port-033746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-033746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-033746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	                "LowerDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-033746",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-033746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-033746",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "370dba46f787089ceb1fb58dbb2fafcff0981c8936e651811a17b4056269b265",
	            "SandboxKey": "/var/run/docker/netns/370dba46f787",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-033746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:2a:ca:aa:fd:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8549f6a07be41a945dcb145bb71d1b75a39e75ddc68f75d19380e8800e056e42",
	                    "EndpointID": "7122e7e7ee671940dd54a8f5f6b6d601a2a1b1e3d09a723ff675629ccc79bc22",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-033746",
	                        "278dbdd59e84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
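Note: the inspect output is consistent with the pause failing before any container-level freeze was attempted: State.Status is "running" and State.Paused is false, and the 22/tcp mapping (127.0.0.1:33489) matches the SSH endpoint the pause command dialed. The two state fields can be checked directly (a sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-033746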
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746: exit status 2 (378.307151ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25: (1.430893311s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p newest-cni-041709 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-033746 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ image   │ newest-cni-041709 image list --format=json                                                                                                                                                                                                    │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ pause   │ -p newest-cni-041709 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	│ delete  │ -p newest-cni-041709                                                                                                                                                                                                                          │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ delete  │ -p newest-cni-041709                                                                                                                                                                                                                          │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ start   │ -p auto-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-557095                  │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-033746 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:20 UTC │
	│ ssh     │ -p auto-557095 pgrep -a kubelet                                                                                                                                                                                                               │ auto-557095                  │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │ 13 Oct 25 23:20 UTC │
	│ image   │ default-k8s-diff-port-033746 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │ 13 Oct 25 23:20 UTC │
	│ pause   │ -p default-k8s-diff-port-033746 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:19:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
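Each entry below follows that klog convention. A small sketch of splitting such a line when post-processing these reports; the field names are my own:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches entries like
// "I1013 23:19:16.927489  639746 out.go:360] Setting OutFile to fd 1 ..."
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch("I1013 23:19:16.927489  639746 out.go:360] Setting OutFile to fd 1 ...")
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}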
	I1013 23:19:16.927489  639746 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:19:16.927638  639746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:16.927662  639746 out.go:374] Setting ErrFile to fd 2...
	I1013 23:19:16.927685  639746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:16.927975  639746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:19:16.928382  639746 out.go:368] Setting JSON to false
	I1013 23:19:16.929264  639746 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10893,"bootTime":1760386664,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:19:16.929330  639746 start.go:141] virtualization:  
	I1013 23:19:16.973359  639746 out.go:179] * [default-k8s-diff-port-033746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:19:17.016525  639746 notify.go:220] Checking for updates...
	I1013 23:19:17.016541  639746 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:19:17.049541  639746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:19:17.086005  639746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:17.110690  639746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:19:17.149001  639746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:19:17.167262  639746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:19:17.200110  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:17.200727  639746 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:19:17.222584  639746 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:19:17.222716  639746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:19:17.280630  639746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-13 23:19:17.271316572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:19:17.280745  639746 docker.go:318] overlay module found
	I1013 23:19:17.290418  639746 out.go:179] * Using the docker driver based on existing profile
	I1013 23:19:17.316716  639746 start.go:305] selected driver: docker
	I1013 23:19:17.316741  639746 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:17.316856  639746 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:19:17.317539  639746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:19:17.375641  639746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-13 23:19:17.365556256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:19:17.376024  639746 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:19:17.376064  639746 cni.go:84] Creating CNI manager for ""
	I1013 23:19:17.376124  639746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:17.376164  639746 start.go:349] cluster config:
	{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:17.385073  639746 out.go:179] * Starting "default-k8s-diff-port-033746" primary control-plane node in "default-k8s-diff-port-033746" cluster
	I1013 23:19:17.388802  639746 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:19:17.394382  639746 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:19:17.398035  639746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:17.398100  639746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:19:17.398133  639746 cache.go:58] Caching tarball of preloaded images
	I1013 23:19:17.398159  639746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:19:17.398228  639746 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:19:17.398239  639746 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:19:17.398352  639746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:19:17.423507  639746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:19:17.423546  639746 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:19:17.423568  639746 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:19:17.423615  639746 start.go:360] acquireMachinesLock for default-k8s-diff-port-033746: {Name:mk4950372c3cd6b03a758b4772e5c43a69d20962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:19:17.423692  639746 start.go:364] duration metric: took 56.319µs to acquireMachinesLock for "default-k8s-diff-port-033746"
	I1013 23:19:17.423715  639746 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:19:17.423727  639746 fix.go:54] fixHost starting: 
	I1013 23:19:17.424096  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:17.445794  639746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-033746: state=Stopped err=<nil>
	W1013 23:19:17.445822  639746 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:19:12.921071  639302 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:19:12.921301  639302 start.go:159] libmachine.API.Create for "auto-557095" (driver="docker")
	I1013 23:19:12.921358  639302 client.go:168] LocalClient.Create starting
	I1013 23:19:12.921428  639302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:19:12.921464  639302 main.go:141] libmachine: Decoding PEM data...
	I1013 23:19:12.921480  639302 main.go:141] libmachine: Parsing certificate...
	I1013 23:19:12.921538  639302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:19:12.921571  639302 main.go:141] libmachine: Decoding PEM data...
	I1013 23:19:12.921585  639302 main.go:141] libmachine: Parsing certificate...
	I1013 23:19:12.921963  639302 cli_runner.go:164] Run: docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:19:12.937558  639302 cli_runner.go:211] docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:19:12.937651  639302 network_create.go:284] running [docker network inspect auto-557095] to gather additional debugging logs...
	I1013 23:19:12.937673  639302 cli_runner.go:164] Run: docker network inspect auto-557095
	W1013 23:19:12.951481  639302 cli_runner.go:211] docker network inspect auto-557095 returned with exit code 1
	I1013 23:19:12.951514  639302 network_create.go:287] error running [docker network inspect auto-557095]: docker network inspect auto-557095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-557095 not found
	I1013 23:19:12.951539  639302 network_create.go:289] output of [docker network inspect auto-557095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-557095 not found
	
	** /stderr **
	I1013 23:19:12.951627  639302 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:12.968675  639302 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:19:12.968875  639302 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:19:12.969160  639302 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:19:12.969572  639302 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08f90}
	I1013 23:19:12.969596  639302 network_create.go:124] attempt to create docker network auto-557095 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 23:19:12.969653  639302 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-557095 auto-557095
	I1013 23:19:13.032590  639302 network_create.go:108] docker network auto-557095 192.168.76.0/24 created
	I1013 23:19:13.032623  639302 kic.go:121] calculated static IP "192.168.76.2" for the "auto-557095" container
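The scan above is the whole subnet-allocation story: candidate 192.168.x.0/24 ranges are tested against the bridges Docker already owns, and the first free one (192.168.76.0/24 here) becomes the cluster network, with .1 as gateway and .2 as the node's static IP. A rough sketch of that walk, assuming the step of 9 between third octets (49, 58, 67, 76, ...) and a hypothetical taken() lookup:

package main

import "fmt"

// taken stands in for "does an existing docker bridge already claim
// this subnet?"; here it hard-codes the three subnets the log skips.
func taken(subnet string) bool {
	used := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	return used[subnet]
}

func main() {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		// First free candidate wins: gateway .1, node IP .2.
		fmt.Printf("using free private subnet %s: gateway 192.168.%d.1, node IP 192.168.%d.2\n",
			subnet, octet, octet)
		break
	}
}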
	I1013 23:19:13.032699  639302 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:19:13.049233  639302 cli_runner.go:164] Run: docker volume create auto-557095 --label name.minikube.sigs.k8s.io=auto-557095 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:19:13.067324  639302 oci.go:103] Successfully created a docker volume auto-557095
	I1013 23:19:13.067414  639302 cli_runner.go:164] Run: docker run --rm --name auto-557095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-557095 --entrypoint /usr/bin/test -v auto-557095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:19:13.578758  639302 oci.go:107] Successfully prepared a docker volume auto-557095
	I1013 23:19:13.578809  639302 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:13.578830  639302 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:19:13.578906  639302 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-557095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 23:19:17.427454  639302 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-557095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.848509371s)
	I1013 23:19:17.427493  639302 kic.go:203] duration metric: took 3.848660113s to extract preloaded images to volume ...
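The preload trick above is worth calling out: the lz4 image tarball is bind-mounted read-only into a throwaway kicbase container alongside the node's volume, and tar unpacks it straight into the volume (about 3.8s here), so the node boots with all images already present. A hedged sketch of the same invocation through os/exec, reusing the paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	volume := "auto-557095"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724"

	// One-shot helper container: tarball mounted read-only, node volume
	// at /extractDir, tar as the entrypoint doing the unpacking.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}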
	W1013 23:19:17.427623  639302 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:19:17.427737  639302 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:19:17.488625  639302 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-557095 --name auto-557095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-557095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-557095 --network auto-557095 --ip 192.168.76.2 --volume auto-557095:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:19:17.449818  639746 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-033746" ...
	I1013 23:19:17.449915  639746 cli_runner.go:164] Run: docker start default-k8s-diff-port-033746
	I1013 23:19:17.816321  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:17.843928  639746 kic.go:430] container "default-k8s-diff-port-033746" state is running.
	I1013 23:19:17.846520  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:17.873626  639746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:19:17.873851  639746 machine.go:93] provisionDockerMachine start ...
	I1013 23:19:17.873909  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:17.908962  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:17.909277  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:17.909286  639746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:19:17.909887  639746 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59696->127.0.0.1:33489: read: connection reset by peer
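That first handshake failure is expected rather than fatal: docker start returns before sshd inside the container is listening, so the provisioner redials until the hostname probe answers (about three seconds later, on the next line). A minimal reachability loop, assuming a bare TCP probe is an adequate stand-in for the SSH dial:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH redials addr until a TCP connection succeeds or the
// deadline passes, mirroring the retry visible in the log here.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // sshd not up yet; try again
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33489", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sshd is accepting connections")
}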
	I1013 23:19:21.054703  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:19:21.054728  639746 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-033746"
	I1013 23:19:21.054801  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.072469  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.072789  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.072809  639746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-033746 && echo "default-k8s-diff-port-033746" | sudo tee /etc/hostname
	I1013 23:19:21.228269  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:19:21.228358  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.246504  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.246855  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.246875  639746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-033746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-033746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-033746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:19:21.395363  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:19:21.395388  639746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:19:21.395414  639746 ubuntu.go:190] setting up certificates
	I1013 23:19:21.395425  639746 provision.go:84] configureAuth start
	I1013 23:19:21.395493  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:21.411973  639746 provision.go:143] copyHostCerts
	I1013 23:19:21.412055  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:19:21.412077  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:19:21.412157  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:19:21.412262  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:19:21.412272  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:19:21.412300  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:19:21.412366  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:19:21.412380  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:19:21.412406  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:19:21.412468  639746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-033746 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-033746 localhost minikube]
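The regenerated server certificate covers every name the machine is addressed by: 127.0.0.1, the container IP 192.168.85.2, the profile name, localhost and minikube. A self-contained sketch of minting a certificate with that SAN set via crypto/x509 (self-signed here for brevity; minikube actually signs with the CA referenced above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-033746"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the provisioning line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-033746", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}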
	I1013 23:19:21.622522  639746 provision.go:177] copyRemoteCerts
	I1013 23:19:21.622594  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:19:21.622640  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.639445  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:21.743007  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:19:21.760509  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 23:19:21.778144  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:19:21.795704  639746 provision.go:87] duration metric: took 400.261571ms to configureAuth
	I1013 23:19:21.795729  639746 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:19:21.795919  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:21.796051  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.813097  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.813397  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.813419  639746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:19:17.970467  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Running}}
	I1013 23:19:17.993111  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.027674  639302 cli_runner.go:164] Run: docker exec auto-557095 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:19:18.092554  639302 oci.go:144] the created container "auto-557095" has a running status.
	I1013 23:19:18.092592  639302 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa...
	I1013 23:19:18.782917  639302 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:19:18.820597  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.845301  639302 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:19:18.845320  639302 kic_runner.go:114] Args: [docker exec --privileged auto-557095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:19:18.910229  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.936511  639302 machine.go:93] provisionDockerMachine start ...
	I1013 23:19:18.936600  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:18.968705  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:18.969040  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:18.969049  639302 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:19:18.969831  639302 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56242->127.0.0.1:33494: read: connection reset by peer
	I1013 23:19:22.122952  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-557095
	
	I1013 23:19:22.123023  639302 ubuntu.go:182] provisioning hostname "auto-557095"
	I1013 23:19:22.123174  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.143020  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:22.143353  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:22.143366  639302 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-557095 && echo "auto-557095" | sudo tee /etc/hostname
	I1013 23:19:22.324314  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-557095
	
	I1013 23:19:22.324405  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.348716  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:22.349037  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:22.349060  639302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-557095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-557095/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-557095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:19:22.514104  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:19:22.514154  639302 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:19:22.514180  639302 ubuntu.go:190] setting up certificates
	I1013 23:19:22.514190  639302 provision.go:84] configureAuth start
	I1013 23:19:22.514255  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:22.559729  639302 provision.go:143] copyHostCerts
	I1013 23:19:22.559804  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:19:22.559817  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:19:22.559889  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:19:22.559989  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:19:22.560002  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:19:22.560026  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:19:22.560082  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:19:22.560090  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:19:22.560111  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:19:22.560165  639302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.auto-557095 san=[127.0.0.1 192.168.76.2 auto-557095 localhost minikube]
	I1013 23:19:22.164048  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:19:22.164068  639746 machine.go:96] duration metric: took 4.290208164s to provisionDockerMachine
	I1013 23:19:22.164079  639746 start.go:293] postStartSetup for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:19:22.164090  639746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:19:22.164171  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:19:22.164214  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.191277  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.307816  639746 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:19:22.311521  639746 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:19:22.311550  639746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:19:22.311561  639746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:19:22.311614  639746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:19:22.311695  639746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:19:22.311796  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:19:22.319531  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:22.345089  639746 start.go:296] duration metric: took 180.993956ms for postStartSetup
	I1013 23:19:22.345185  639746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:19:22.345242  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.370373  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.480322  639746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:19:22.485510  639746 fix.go:56] duration metric: took 5.06178212s for fixHost
	I1013 23:19:22.485534  639746 start.go:83] releasing machines lock for "default-k8s-diff-port-033746", held for 5.061831416s
	I1013 23:19:22.485607  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:22.502390  639746 ssh_runner.go:195] Run: cat /version.json
	I1013 23:19:22.502442  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.502482  639746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:19:22.502535  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.528273  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.529497  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.639355  639746 ssh_runner.go:195] Run: systemctl --version
	I1013 23:19:22.735670  639746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:19:22.789577  639746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:19:22.794199  639746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:19:22.794264  639746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:19:22.802705  639746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:19:22.802726  639746 start.go:495] detecting cgroup driver to use...
	I1013 23:19:22.802757  639746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:19:22.802802  639746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:19:22.819801  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:19:22.834444  639746 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:19:22.834505  639746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:19:22.851224  639746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:19:22.865940  639746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:19:23.046289  639746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:19:23.195146  639746 docker.go:234] disabling docker service ...
	I1013 23:19:23.195224  639746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:19:23.211185  639746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:19:23.226839  639746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:19:23.357051  639746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:19:23.524439  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:19:23.538480  639746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:19:23.555973  639746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:19:23.556044  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.565978  639746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:19:23.566059  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.575304  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.583990  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.592931  639746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:19:23.601082  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.609946  639746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.621062  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
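	Taken together, the sed pipeline above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (a sketch reconstructed from the commands, not a capture from the node; the TOML section placement is an assumption, since the edits only match key names):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]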
	I1013 23:19:23.632390  639746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:19:23.642116  639746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:19:23.650402  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:23.801124  639746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:19:23.975207  639746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:19:23.975273  639746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
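	"Will wait 60s for socket path" amounts to polling stat on the socket until it appears. An equivalent shell sketch (the 60s budget is from the log; the loop itself is illustrative):
	  deadline=$((SECONDS + 60))
	  until stat /var/run/crio/crio.sock >/dev/null 2>&1; do
	    [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for crio.sock" >&2; exit 1; }
	    sleep 1
	  done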
	I1013 23:19:23.979997  639746 start.go:563] Will wait 60s for crictl version
	I1013 23:19:23.980073  639746 ssh_runner.go:195] Run: which crictl
	I1013 23:19:23.983783  639746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:19:24.024549  639746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:19:24.024642  639746 ssh_runner.go:195] Run: crio --version
	I1013 23:19:24.059374  639746 ssh_runner.go:195] Run: crio --version
	I1013 23:19:24.100092  639746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:19:22.935263  639302 provision.go:177] copyRemoteCerts
	I1013 23:19:22.935331  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:19:22.935380  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.977178  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.079333  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:19:23.102520  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 23:19:23.141483  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:19:23.161989  639302 provision.go:87] duration metric: took 647.777406ms to configureAuth
	I1013 23:19:23.162066  639302 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:19:23.162303  639302 config.go:182] Loaded profile config "auto-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:23.162452  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.181157  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:23.181454  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:23.181468  639302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:19:23.490519  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:19:23.490545  639302 machine.go:96] duration metric: took 4.554014075s to provisionDockerMachine
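	The SSH command at 23:19:23.181468 writes /etc/sysconfig/crio.minikube and restarts CRI-O so the insecure-registry option takes effect. A confirmation sketch (path and variable name from the log; the systemctl property check is a hypothetical extra, valid only if crio.service sources the file via EnvironmentFile=):
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl show crio -p EnvironmentFiles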
	I1013 23:19:23.490574  639302 client.go:171] duration metric: took 10.569188867s to LocalClient.Create
	I1013 23:19:23.490588  639302 start.go:167] duration metric: took 10.569289353s to libmachine.API.Create "auto-557095"
	I1013 23:19:23.490596  639302 start.go:293] postStartSetup for "auto-557095" (driver="docker")
	I1013 23:19:23.490609  639302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:19:23.490682  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:19:23.490728  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.515474  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.624070  639302 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:19:23.628150  639302 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:19:23.628179  639302 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:19:23.628190  639302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:19:23.628241  639302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:19:23.628320  639302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:19:23.628444  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:19:23.638131  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:23.661852  639302 start.go:296] duration metric: took 171.238762ms for postStartSetup
	I1013 23:19:23.662290  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:23.679887  639302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/config.json ...
	I1013 23:19:23.680157  639302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:19:23.680202  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.713630  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.821957  639302 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:19:23.828213  639302 start.go:128] duration metric: took 10.910522077s to createHost
	I1013 23:19:23.828243  639302 start.go:83] releasing machines lock for "auto-557095", held for 10.910657476s
	I1013 23:19:23.828398  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:23.846977  639302 ssh_runner.go:195] Run: cat /version.json
	I1013 23:19:23.847028  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.847050  639302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:19:23.847200  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.883350  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.891226  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:24.079815  639302 ssh_runner.go:195] Run: systemctl --version
	I1013 23:19:24.087363  639302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:19:24.144903  639302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:19:24.151259  639302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:19:24.151325  639302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:19:24.188755  639302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 23:19:24.188775  639302 start.go:495] detecting cgroup driver to use...
	I1013 23:19:24.188808  639302 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:19:24.188864  639302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:19:24.210214  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:19:24.227148  639302 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:19:24.227298  639302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:19:24.251001  639302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:19:24.270312  639302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:19:24.454434  639302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:19:24.641643  639302 docker.go:234] disabling docker service ...
	I1013 23:19:24.641699  639302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:19:24.673295  639302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:19:24.689277  639302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:19:24.888339  639302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:19:25.063664  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:19:25.078591  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:19:25.094791  639302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:19:25.094871  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.104470  639302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:19:25.104553  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.114313  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.124191  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.138561  639302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:19:25.148001  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.157522  639302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.173350  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.185572  639302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:19:25.194549  639302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:19:25.203664  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:25.395584  639302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:19:25.621794  639302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:19:25.621864  639302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:19:25.626181  639302 start.go:563] Will wait 60s for crictl version
	I1013 23:19:25.626246  639302 ssh_runner.go:195] Run: which crictl
	I1013 23:19:25.630432  639302 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:19:25.684186  639302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:19:25.684273  639302 ssh_runner.go:195] Run: crio --version
	I1013 23:19:25.733376  639302 ssh_runner.go:195] Run: crio --version
	I1013 23:19:25.798693  639302 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:19:24.103164  639746 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:24.120871  639746 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:19:24.125308  639746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
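	The one-liner above is minikube's idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts under sudo. Generalized as a sketch (the helper name and arguments are illustrative):
	  # usage: set_host_entry 192.168.85.1 host.minikube.internal
	  set_host_entry() {
	    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	  }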
	I1013 23:19:24.136732  639746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:19:24.136850  639746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:24.136904  639746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:24.174594  639746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:24.174613  639746 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:19:24.174667  639746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:24.212841  639746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:24.212911  639746 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:19:24.212936  639746 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1013 23:19:24.213057  639746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-033746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
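	The [Unit]/[Service] fragment above is later written as the systemd drop-in 10-kubeadm.conf (see the scp at 23:19:24.334287 below). Once it lands, the merged unit can be inspected; a sketch:
	  sudo systemctl daemon-reload
	  systemctl cat kubelet        # renders kubelet.service plus the 10-kubeadm.conf drop-in
	  systemctl is-active kubelet  # "active" once the `systemctl start kubelet` below succeeds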
	I1013 23:19:24.213160  639746 ssh_runner.go:195] Run: crio config
	I1013 23:19:24.308274  639746 cni.go:84] Creating CNI manager for ""
	I1013 23:19:24.308299  639746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:24.308350  639746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:19:24.308381  639746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-033746 NodeName:default-k8s-diff-port-033746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:19:24.308603  639746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-033746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
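	A generated config like the one above can be sanity-checked offline before kubeadm consumes it; a sketch against the path scp'd a few lines below (recent kubeadm releases ship `kubeadm config validate`, so treat the subcommand's availability on this binary as an assumption):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new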
	
	I1013 23:19:24.308699  639746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:19:24.324749  639746 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:19:24.324911  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:19:24.334287  639746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 23:19:24.352282  639746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:19:24.366680  639746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 23:19:24.380595  639746 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:19:24.385310  639746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:24.395820  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:24.565488  639746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:24.594294  639746 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746 for IP: 192.168.85.2
	I1013 23:19:24.594324  639746 certs.go:195] generating shared ca certs ...
	I1013 23:19:24.594341  639746 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:24.594550  639746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:19:24.594639  639746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:19:24.594655  639746 certs.go:257] generating profile certs ...
	I1013 23:19:24.594780  639746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key
	I1013 23:19:24.594891  639746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68
	I1013 23:19:24.594960  639746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key
	I1013 23:19:24.595131  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:19:24.595192  639746 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:19:24.595207  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:19:24.595253  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:19:24.595299  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:19:24.595349  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:19:24.595425  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:24.596114  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:19:24.641041  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:19:24.704536  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:19:24.737058  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:19:24.790669  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 23:19:24.835663  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:19:24.900200  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:19:24.917493  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:19:24.937041  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:19:24.979811  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:19:25.005140  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:19:25.029435  639746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:19:25.043963  639746 ssh_runner.go:195] Run: openssl version
	I1013 23:19:25.050691  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:19:25.060440  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.067835  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.067987  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.118280  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:19:25.128500  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:19:25.138690  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.143482  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.143560  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.189172  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:19:25.198170  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:19:25.207954  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.213398  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.213478  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.264149  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
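	The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is exposed under <subject-hash>.0 so the TLS stack can find it by hash. The link names in the log (51391683.0, 3ec20f2e.0, b5213941.0) come straight from `openssl x509 -hash`; the pattern for one cert, as a sketch:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"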
	I1013 23:19:25.272420  639746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:19:25.284593  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:19:25.352756  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:19:25.443026  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:19:25.510921  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:19:25.636432  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:19:25.756697  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
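	The six openssl runs above all use -checkend 86400, i.e. "will this cert still be valid 24 hours from now?" (exit 0 if yes). The same checks in loop form, as a sketch:
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	      || echo "${c}.crt expires within 24h" >&2
	  done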
	I1013 23:19:25.835496  639746 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:25.835585  639746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:19:25.835642  639746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:19:25.924948  639746 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:19:25.924971  639746 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:19:25.924980  639746 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:19:25.924983  639746 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:19:25.924987  639746 cri.go:89] found id: ""
	I1013 23:19:25.925037  639746 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:19:25.971412  639746 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:19:25Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:19:25.971509  639746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:19:25.995680  639746 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:19:25.995701  639746 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:19:25.995774  639746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:19:26.019441  639746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:19:26.019876  639746 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-033746" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:26.019997  639746 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-033746" cluster setting kubeconfig missing "default-k8s-diff-port-033746" context setting]
	I1013 23:19:26.020356  639746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.021744  639746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:19:26.037804  639746 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:19:26.037851  639746 kubeadm.go:601] duration metric: took 42.14235ms to restartPrimaryControlPlane
	I1013 23:19:26.037860  639746 kubeadm.go:402] duration metric: took 202.376268ms to StartCluster
	I1013 23:19:26.037877  639746 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.037949  639746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:26.038680  639746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.038953  639746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:19:26.039210  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:26.039257  639746 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:19:26.039320  639746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.039334  639746 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.039340  639746 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:19:26.039347  639746 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.039361  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.039366  639746 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.039372  639746 addons.go:247] addon dashboard should already be in state true
	I1013 23:19:26.039392  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.039809  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.039827  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.040252  639746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.040278  639746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-033746"
	I1013 23:19:26.040573  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.043582  639746 out.go:179] * Verifying Kubernetes components...
	I1013 23:19:26.048351  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:26.117941  639746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:19:26.119805  639746 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.119826  639746 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:19:26.119852  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.120958  639746 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:26.120978  639746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:19:26.121041  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.121338  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.124811  639746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:19:26.131223  639746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:19:26.134199  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:19:26.134231  639746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:19:26.134303  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.177050  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.183302  639746 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:26.183321  639746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:19:26.183382  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.183903  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.212776  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.432196  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:26.505057  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
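	Both applies above use the cluster's own versioned kubectl with the in-VM kubeconfig rather than the host's kubectl. The same invocation wrapped as a sketch (helper name illustrative; paths from the log):
	  apply_addon() {  # usage: apply_addon storage-provisioner.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f "/etc/kubernetes/addons/$1"
	  }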
	I1013 23:19:26.507558  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:19:26.507577  639746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:19:26.540337  639746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:26.623405  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:19:26.623477  639746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:19:26.767680  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:19:26.767755  639746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:19:26.883685  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:19:26.883759  639746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:19:25.801836  639302 cli_runner.go:164] Run: docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:25.826727  639302 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:19:25.831160  639302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:25.855682  639302 kubeadm.go:883] updating cluster {Name:auto-557095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:19:25.855797  639302 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:25.855865  639302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:25.904337  639302 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:25.904364  639302 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:19:25.904433  639302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:25.940769  639302 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:25.940796  639302 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:19:25.940804  639302 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:19:25.940894  639302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-557095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:19:25.940974  639302 ssh_runner.go:195] Run: crio config
	I1013 23:19:26.086948  639302 cni.go:84] Creating CNI manager for ""
	I1013 23:19:26.086976  639302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:26.087000  639302 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:19:26.087024  639302 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-557095 NodeName:auto-557095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:19:26.087281  639302 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-557095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:19:26.087357  639302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:19:26.097516  639302 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:19:26.097590  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:19:26.116814  639302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 23:19:26.142350  639302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:19:26.217477  639302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1013 23:19:26.233501  639302 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:19:26.243610  639302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:26.271578  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:26.497058  639302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:26.549185  639302 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095 for IP: 192.168.76.2
	I1013 23:19:26.549219  639302 certs.go:195] generating shared ca certs ...
	I1013 23:19:26.549240  639302 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.549379  639302 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:19:26.549428  639302 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:19:26.549440  639302 certs.go:257] generating profile certs ...
	I1013 23:19:26.549492  639302 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key
	I1013 23:19:26.549508  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt with IP's: []
	I1013 23:19:26.744846  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt ...
	I1013 23:19:26.744881  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: {Name:mk0fc3d55b404b59e78fcc97a03a72c2430acd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.745073  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key ...
	I1013 23:19:26.745089  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key: {Name:mkb7aa04b5f4ff3645d883ff3cba98a0fd4ee60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.745174  639302 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103
	I1013 23:19:26.745194  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 23:19:27.018805  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 ...
	I1013 23:19:27.018840  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103: {Name:mk8ec37282c626c1658f4976f369f92f39f4bf71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.019068  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103 ...
	I1013 23:19:27.019103  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103: {Name:mk43d5c29e5b5df93e61c350ece6b1d46dcec909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.019226  639302 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt
	I1013 23:19:27.019319  639302 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key
	I1013 23:19:27.019380  639302 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key
	I1013 23:19:27.019399  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt with IP's: []
	I1013 23:19:27.517916  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt ...
	I1013 23:19:27.517951  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt: {Name:mk59fa72232a9083545ed59272c701500ce02942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.518186  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key ...
	I1013 23:19:27.518201  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key: {Name:mkf5ee79cec32ff3de08ba41e9f0c35e7d1f8c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
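The three profile certificates above are generated by minikube's crypto.go and signed by the shared minikube CA: a client certificate for kubectl, an apiserver serving certificate whose SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.76.2), and an aggregator (front-proxy) client certificate. A rough openssl equivalent of the signing flow, illustrative only (subjects and file names are placeholders, not minikube's actual code path):

    # client certificate signed by the shared CA
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out client.crt

    # apiserver serving certificate with the SANs listed in the log above
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -out apiserver.crt \
        -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')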
	I1013 23:19:27.518412  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:19:27.518460  639302 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:19:27.518477  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:19:27.518500  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:19:27.518527  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:19:27.518553  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:19:27.518608  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:27.525289  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:19:27.595468  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:19:27.652958  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:19:27.689376  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:19:27.720324  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 23:19:27.750152  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:19:27.778212  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:19:27.801976  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:19:27.835744  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:19:27.864400  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:19:27.893721  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:19:27.922998  639302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:19:27.947044  639302 ssh_runner.go:195] Run: openssl version
	I1013 23:19:27.955741  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:19:27.969411  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:19:27.974157  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:19:27.974298  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:19:28.020576  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:19:28.031337  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:19:28.041908  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.046843  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.046968  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.090780  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:19:28.100946  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:19:28.111004  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.119579  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.119703  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.162509  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
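The openssl x509 -hash calls explain the odd symlink names: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so for each PEM the runner computes the hash and links <hash>.0 to the certificate. For the minikube CA in this run the pair of steps is:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0

After that, any TLS client using the system trust store resolves the minikube CA without extra flags.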
	I1013 23:19:28.172311  639302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:19:28.177388  639302 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:19:28.177500  639302 kubeadm.go:400] StartCluster: {Name:auto-557095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:28.177627  639302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:19:28.177728  639302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:19:28.217737  639302 cri.go:89] found id: ""
	I1013 23:19:28.217888  639302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:19:28.226561  639302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:19:28.234738  639302 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:19:28.234808  639302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:19:28.251440  639302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:19:28.251461  639302 kubeadm.go:157] found existing configuration files:
	
	I1013 23:19:28.251514  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 23:19:28.264032  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:19:28.264102  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:19:28.276389  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 23:19:28.287370  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:19:28.287439  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:19:28.299505  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 23:19:28.315593  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:19:28.315666  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:19:28.323060  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 23:19:28.336505  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:19:28.336578  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
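This block is stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8443 and removed if the check fails. On a first start none of the files exist, so every grep exits with status 2 and each rm -f is a no-op. Condensed into one loop, the pattern is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done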
	I1013 23:19:28.346768  639302 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:19:28.430736  639302 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:19:28.430817  639302 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:19:28.475257  639302 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:19:28.475333  639302 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:19:28.475384  639302 kubeadm.go:318] OS: Linux
	I1013 23:19:28.475433  639302 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:19:28.475494  639302 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:19:28.475544  639302 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:19:28.475594  639302 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:19:28.475673  639302 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:19:28.475744  639302 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:19:28.475796  639302 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:19:28.475852  639302 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:19:28.475905  639302 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:19:28.651754  639302 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:19:28.651873  639302 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:19:28.651972  639302 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:19:28.669285  639302 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:19:26.983579  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:19:26.983663  639746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:19:27.036225  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:19:27.036300  639746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:19:27.083429  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:19:27.083505  639746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:19:27.128175  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:19:27.128253  639746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:19:27.172188  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:19:27.172272  639746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:19:27.204172  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:19:28.673215  639302 out.go:252]   - Generating certificates and keys ...
	I1013 23:19:28.673314  639302 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:19:28.673423  639302 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:19:29.054617  639302 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:19:30.577463  639302 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:19:31.151496  639302 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:19:31.239488  639302 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:19:31.443458  639302 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:19:31.443594  639302 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-557095 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:19:32.295435  639302 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:19:32.295569  639302 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-557095 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:19:32.689720  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.257438158s)
	I1013 23:19:35.717006  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.211917464s)
	I1013 23:19:35.717101  639746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.17674318s)
	I1013 23:19:35.717157  639746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:19:35.726045  639746 node_ready.go:49] node "default-k8s-diff-port-033746" is "Ready"
	I1013 23:19:35.726086  639746 node_ready.go:38] duration metric: took 8.908852ms for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:19:35.726124  639746 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:19:35.726201  639746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:19:35.841511  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.637218136s)
	I1013 23:19:35.841773  639746 api_server.go:72] duration metric: took 9.802781824s to wait for apiserver process to appear ...
	I1013 23:19:35.841791  639746 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:19:35.841818  639746 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1013 23:19:35.844816  639746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-033746 addons enable metrics-server
	
	I1013 23:19:35.847705  639746 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:19:33.276486  639302 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:19:33.557718  639302 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:19:34.521548  639302 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:19:34.522022  639302 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:19:35.081508  639302 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:19:35.214680  639302 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:19:35.979439  639302 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:19:36.258870  639302 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:19:36.572789  639302 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:19:36.573948  639302 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:19:36.577028  639302 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:19:35.850563  639746 addons.go:514] duration metric: took 9.811293999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:19:35.856358  639746 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1013 23:19:35.860226  639746 api_server.go:141] control plane version: v1.34.1
	I1013 23:19:35.860258  639746 api_server.go:131] duration metric: took 18.45958ms to wait for apiserver health ...
	I1013 23:19:35.860268  639746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:19:35.867708  639746 system_pods.go:59] 8 kube-system pods found
	I1013 23:19:35.867751  639746 system_pods.go:61] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:19:35.867761  639746 system_pods.go:61] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:35.867768  639746 system_pods.go:61] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:19:35.867792  639746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:35.867802  639746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:35.867812  639746 system_pods.go:61] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:19:35.867822  639746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:35.867833  639746 system_pods.go:61] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:19:35.867840  639746 system_pods.go:74] duration metric: took 7.561516ms to wait for pod list to return data ...
	I1013 23:19:35.867856  639746 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:19:35.876839  639746 default_sa.go:45] found service account: "default"
	I1013 23:19:35.876869  639746 default_sa.go:55] duration metric: took 9.003875ms for default service account to be created ...
	I1013 23:19:35.876889  639746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:19:35.894506  639746 system_pods.go:86] 8 kube-system pods found
	I1013 23:19:35.894554  639746 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:19:35.894565  639746 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:35.894571  639746 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:19:35.894578  639746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:35.894585  639746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:35.894592  639746 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:19:35.894603  639746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:35.894620  639746 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:19:35.894628  639746 system_pods.go:126] duration metric: took 17.731594ms to wait for k8s-apps to be running ...
	I1013 23:19:35.894640  639746 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:19:35.894702  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:19:35.927066  639746 system_svc.go:56] duration metric: took 32.4157ms WaitForService to wait for kubelet
	I1013 23:19:35.927122  639746 kubeadm.go:586] duration metric: took 9.888131087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:19:35.927142  639746 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:19:35.934390  639746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:19:35.934437  639746 node_conditions.go:123] node cpu capacity is 2
	I1013 23:19:35.934451  639746 node_conditions.go:105] duration metric: took 7.302648ms to run NodePressure ...
	I1013 23:19:35.934468  639746 start.go:241] waiting for startup goroutines ...
	I1013 23:19:35.934481  639746 start.go:246] waiting for cluster config update ...
	I1013 23:19:35.934493  639746 start.go:255] writing updated cluster config ...
	I1013 23:19:35.934848  639746 ssh_runner.go:195] Run: rm -f paused
	I1013 23:19:35.940121  639746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:19:35.950632  639746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:19:36.581769  639302 out.go:252]   - Booting up control plane ...
	I1013 23:19:36.581901  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:19:36.582358  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:19:36.585491  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:19:36.609331  639302 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:19:36.609450  639302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:19:36.618813  639302 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:19:36.618917  639302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:19:36.618959  639302 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:19:36.839360  639302 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:19:36.839485  639302 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1013 23:19:37.956636  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:39.959553  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:37.843602  639302 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001077633s
	I1013 23:19:37.843805  639302 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:19:37.844094  639302 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 23:19:37.844213  639302 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:19:37.844800  639302 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1013 23:19:41.966901  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:44.457554  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:43.148826  639302 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.303819182s
	I1013 23:19:45.517018  639302 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.671784319s
	I1013 23:19:47.348652  639302 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.50211531s
	I1013 23:19:47.373372  639302 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:19:47.392751  639302 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:19:47.414075  639302 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:19:47.414570  639302 kubeadm.go:318] [mark-control-plane] Marking the node auto-557095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:19:47.430845  639302 kubeadm.go:318] [bootstrap-token] Using token: fl9uev.ja78svxq4m6apyxu
	I1013 23:19:47.433796  639302 out.go:252]   - Configuring RBAC rules ...
	I1013 23:19:47.433921  639302 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:19:47.441880  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:19:47.456252  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:19:47.462686  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:19:47.469093  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:19:47.476679  639302 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:19:47.753880  639302 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:19:48.206625  639302 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:19:48.754130  639302 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:19:48.755354  639302 kubeadm.go:318] 
	I1013 23:19:48.755438  639302 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:19:48.755451  639302 kubeadm.go:318] 
	I1013 23:19:48.755532  639302 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:19:48.755540  639302 kubeadm.go:318] 
	I1013 23:19:48.755573  639302 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:19:48.755639  639302 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:19:48.755701  639302 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:19:48.755709  639302 kubeadm.go:318] 
	I1013 23:19:48.755768  639302 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:19:48.755776  639302 kubeadm.go:318] 
	I1013 23:19:48.755826  639302 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:19:48.755835  639302 kubeadm.go:318] 
	I1013 23:19:48.755889  639302 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:19:48.755970  639302 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:19:48.756044  639302 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:19:48.756051  639302 kubeadm.go:318] 
	I1013 23:19:48.756138  639302 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:19:48.756221  639302 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:19:48.756229  639302 kubeadm.go:318] 
	I1013 23:19:48.756316  639302 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fl9uev.ja78svxq4m6apyxu \
	I1013 23:19:48.756433  639302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:19:48.756459  639302 kubeadm.go:318] 	--control-plane 
	I1013 23:19:48.756466  639302 kubeadm.go:318] 
	I1013 23:19:48.756555  639302 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:19:48.756565  639302 kubeadm.go:318] 
	I1013 23:19:48.756650  639302 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fl9uev.ja78svxq4m6apyxu \
	I1013 23:19:48.756758  639302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:19:48.761191  639302 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 23:19:48.761457  639302 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:19:48.761586  639302 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
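The --discovery-token-ca-cert-hash in the join commands above is not a hash of the whole certificate: per the kubeadm documentation it is SHA-256 over the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA, which joining nodes use to pin the CA they fetch with the bootstrap token. It can be recomputed on the control plane (using the certificateDir from this run) with:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'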
	I1013 23:19:48.761625  639302 cni.go:84] Creating CNI manager for ""
	I1013 23:19:48.761633  639302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:48.766751  639302 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 23:19:46.957000  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:49.456819  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:48.769762  639302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:19:48.773854  639302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:19:48.773879  639302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:19:48.787688  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:19:49.111867  639302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:19:49.112033  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:49.112127  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-557095 minikube.k8s.io/updated_at=2025_10_13T23_19_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=auto-557095 minikube.k8s.io/primary=true
	I1013 23:19:49.276873  639302 ops.go:34] apiserver oom_adj: -16
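The -16 read back from /proc/<pid>/oom_adj is expected: the kubelet assigns node-critical static pods an oom_score_adj of -997, which the kernel's legacy oom_adj view truncates to -16, so the OOM killer reaps the apiserver only as a last resort. minikube logs the value as a sanity check that the control plane received that protection.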
	I1013 23:19:49.276983  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:49.777567  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:50.277263  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:50.778011  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:51.277042  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:51.778037  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:52.278092  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:52.777549  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:53.277099  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:53.435922  639302 kubeadm.go:1113] duration metric: took 4.323959141s to wait for elevateKubeSystemPrivileges
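The burst of kubectl get sa default calls before this line is a readiness poll: elevateKubeSystemPrivileges cannot finish until the controller-manager has minted the default service account, so the runner retries roughly every 500ms until the get succeeds. A standalone equivalent with an explicit timeout:

    # wait up to 60s for the default service account to appear
    timeout 60 bash -c 'until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done'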
	I1013 23:19:53.435948  639302 kubeadm.go:402] duration metric: took 25.25845552s to StartCluster
	I1013 23:19:53.435965  639302 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:53.436040  639302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:53.437028  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:53.437244  639302 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:19:53.437389  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:19:53.437638  639302 config.go:182] Loaded profile config "auto-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:53.437673  639302 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:19:53.437733  639302 addons.go:69] Setting storage-provisioner=true in profile "auto-557095"
	I1013 23:19:53.437747  639302 addons.go:238] Setting addon storage-provisioner=true in "auto-557095"
	I1013 23:19:53.437774  639302 host.go:66] Checking if "auto-557095" exists ...
	I1013 23:19:53.438273  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.438840  639302 addons.go:69] Setting default-storageclass=true in profile "auto-557095"
	I1013 23:19:53.438869  639302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-557095"
	I1013 23:19:53.439194  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.442113  639302 out.go:179] * Verifying Kubernetes components...
	I1013 23:19:53.458852  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:53.474408  639302 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:19:53.479200  639302 addons.go:238] Setting addon default-storageclass=true in "auto-557095"
	I1013 23:19:53.479241  639302 host.go:66] Checking if "auto-557095" exists ...
	I1013 23:19:53.479760  639302 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:53.479777  639302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:19:53.479830  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:53.480003  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.527277  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:53.532607  639302 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:53.532631  639302 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:19:53.532698  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:53.558384  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:53.804271  639302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:53.804632  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:19:53.807523  639302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:53.826462  639302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:53.857043  639302 node_ready.go:35] waiting up to 15m0s for node "auto-557095" to be "Ready" ...
	I1013 23:19:54.320471  639302 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
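The host record mentioned here comes from the sed pipeline a few lines up: it rewrites the coredns ConfigMap so that in-cluster lookups of host.minikube.internal resolve to the gateway address 192.168.76.1. Reconstructed from the sed expressions (not dumped from the cluster), the injected stanza sits ahead of the forward plugin in the Corefile and looks like:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }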
	I1013 23:19:54.678706  639302 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1013 23:19:51.956615  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:53.956715  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:56.456816  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:54.681626  639302 addons.go:514] duration metric: took 1.243933963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:19:54.825381  639302 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-557095" context rescaled to 1 replicas
	W1013 23:19:55.860151  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:19:58.956427  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:01.457274  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:57.860346  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:00.381021  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:03.956022  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:05.956499  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:02.860033  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:04.860402  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:07.360147  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	I1013 23:20:07.456553  639746 pod_ready.go:94] pod "coredns-66bc5c9577-qf4lq" is "Ready"
	I1013 23:20:07.456584  639746 pod_ready.go:86] duration metric: took 31.50591381s for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.459699  639746 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.486147  639746 pod_ready.go:94] pod "etcd-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.486176  639746 pod_ready.go:86] duration metric: took 26.447156ms for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.489232  639746 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.494475  639746 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.494552  639746 pod_ready.go:86] duration metric: took 5.288904ms for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.497504  639746 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.654481  639746 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.654512  639746 pod_ready.go:86] duration metric: took 156.970722ms for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.854785  639746 pod_ready.go:83] waiting for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.254510  639746 pod_ready.go:94] pod "kube-proxy-mxnv7" is "Ready"
	I1013 23:20:08.254546  639746 pod_ready.go:86] duration metric: took 399.723751ms for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.454714  639746 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.854321  639746 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:08.854352  639746 pod_ready.go:86] duration metric: took 399.612729ms for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.854366  639746 pod_ready.go:40] duration metric: took 32.914195813s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:08.923018  639746 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:20:08.928247  639746 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-033746" cluster and "default" namespace by default
	W1013 23:20:09.366240  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:11.859852  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:13.861094  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:16.359959  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:18.360293  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:20.360717  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:22.860743  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:24.862094  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:27.360748  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:29.860555  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:31.860715  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:33.860913  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	I1013 23:20:35.359868  639302 node_ready.go:49] node "auto-557095" is "Ready"
	I1013 23:20:35.359901  639302 node_ready.go:38] duration metric: took 41.502778675s for node "auto-557095" to be "Ready" ...
	I1013 23:20:35.359915  639302 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:20:35.359986  639302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:20:35.373372  639302 api_server.go:72] duration metric: took 41.936099931s to wait for apiserver process to appear ...
	I1013 23:20:35.373395  639302 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:20:35.373415  639302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:20:35.383278  639302 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:20:35.385592  639302 api_server.go:141] control plane version: v1.34.1
	I1013 23:20:35.385633  639302 api_server.go:131] duration metric: took 12.230297ms to wait for apiserver health ...
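The healthz probe is a plain HTTPS GET that returns the literal body "ok". Assuming the default RBAC, which lets anonymous clients read /healthz, the same check can be run by hand (-k because the serving cert chains to the minikube CA rather than the system roots):

    $ curl -k https://192.168.76.2:8443/healthz
    ok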
	I1013 23:20:35.385642  639302 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:20:35.390140  639302 system_pods.go:59] 8 kube-system pods found
	I1013 23:20:35.390256  639302 system_pods.go:61] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.390300  639302 system_pods.go:61] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.390333  639302 system_pods.go:61] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.390352  639302 system_pods.go:61] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.390391  639302 system_pods.go:61] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.390429  639302 system_pods.go:61] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.390448  639302 system_pods.go:61] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.390476  639302 system_pods.go:61] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.390530  639302 system_pods.go:74] duration metric: took 4.862411ms to wait for pod list to return data ...
	I1013 23:20:35.390573  639302 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:20:35.410226  639302 default_sa.go:45] found service account: "default"
	I1013 23:20:35.410265  639302 default_sa.go:55] duration metric: took 19.662888ms for default service account to be created ...
	I1013 23:20:35.410276  639302 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:20:35.422371  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.422415  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.422426  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.422433  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.422438  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.422442  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.422447  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.422451  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.422459  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.422489  639302 retry.go:31] will retry after 203.233916ms: missing components: kube-dns
	I1013 23:20:35.634644  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.634683  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.634691  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.634698  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.634702  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.634706  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.634720  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.634724  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.634735  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.634756  639302 retry.go:31] will retry after 357.661569ms: missing components: kube-dns
	I1013 23:20:35.996643  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.996683  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.996691  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.996698  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.996703  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.996707  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.996712  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.996716  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.996722  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.996743  639302 retry.go:31] will retry after 305.740238ms: missing components: kube-dns
	I1013 23:20:36.309467  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:36.309511  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:36.309519  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:36.309526  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:36.309530  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:36.309534  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:36.309539  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:36.309543  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:36.309548  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:36.309564  639302 retry.go:31] will retry after 454.04081ms: missing components: kube-dns
	I1013 23:20:36.767652  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:36.767689  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Running
	I1013 23:20:36.767696  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:36.767700  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:36.767705  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:36.767709  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:36.767747  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:36.767759  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:36.767765  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Running
	I1013 23:20:36.767774  639302 system_pods.go:126] duration metric: took 1.357491783s to wait for k8s-apps to be running ...
	I1013 23:20:36.767786  639302 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:20:36.767854  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:36.781759  639302 system_svc.go:56] duration metric: took 13.961881ms WaitForService to wait for kubelet
	I1013 23:20:36.781786  639302 kubeadm.go:586] duration metric: took 43.344520207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:20:36.781814  639302 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:20:36.785460  639302 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:20:36.785497  639302 node_conditions.go:123] node cpu capacity is 2
	I1013 23:20:36.785513  639302 node_conditions.go:105] duration metric: took 3.694007ms to run NodePressure ...
	I1013 23:20:36.785526  639302 start.go:241] waiting for startup goroutines ...
	I1013 23:20:36.785536  639302 start.go:246] waiting for cluster config update ...
	I1013 23:20:36.785548  639302 start.go:255] writing updated cluster config ...
	I1013 23:20:36.785882  639302 ssh_runner.go:195] Run: rm -f paused
	I1013 23:20:36.789650  639302 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:36.793393  639302 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.798604  639302 pod_ready.go:94] pod "coredns-66bc5c9577-74t9m" is "Ready"
	I1013 23:20:36.798633  639302 pod_ready.go:86] duration metric: took 5.214921ms for pod "coredns-66bc5c9577-74t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.801312  639302 pod_ready.go:83] waiting for pod "etcd-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.806548  639302 pod_ready.go:94] pod "etcd-auto-557095" is "Ready"
	I1013 23:20:36.806578  639302 pod_ready.go:86] duration metric: took 5.241521ms for pod "etcd-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.809227  639302 pod_ready.go:83] waiting for pod "kube-apiserver-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.814381  639302 pod_ready.go:94] pod "kube-apiserver-auto-557095" is "Ready"
	I1013 23:20:36.814414  639302 pod_ready.go:86] duration metric: took 5.162417ms for pod "kube-apiserver-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.817085  639302 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.194222  639302 pod_ready.go:94] pod "kube-controller-manager-auto-557095" is "Ready"
	I1013 23:20:37.194252  639302 pod_ready.go:86] duration metric: took 377.138693ms for pod "kube-controller-manager-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.397257  639302 pod_ready.go:83] waiting for pod "kube-proxy-2hnwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.794260  639302 pod_ready.go:94] pod "kube-proxy-2hnwf" is "Ready"
	I1013 23:20:37.794338  639302 pod_ready.go:86] duration metric: took 397.044859ms for pod "kube-proxy-2hnwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.993774  639302 pod_ready.go:83] waiting for pod "kube-scheduler-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:38.396070  639302 pod_ready.go:94] pod "kube-scheduler-auto-557095" is "Ready"
	I1013 23:20:38.396146  639302 pod_ready.go:86] duration metric: took 402.33858ms for pod "kube-scheduler-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:38.396171  639302 pod_ready.go:40] duration metric: took 1.606487824s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:38.451253  639302 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:20:38.454336  639302 out.go:179] * Done! kubectl is now configured to use "auto-557095" cluster and "default" namespace by default
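	The retry.go lines above show the start path polling kube-system until kube-dns reports Running, lengthening the wait between attempts. Below is a minimal sketch of that polling pattern using client-go; the kubeconfig path, deadline, and backoff constants are illustrative assumptions, not minikube's actual schedule.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll kube-system pods until everything is Running, growing the
		// wait between attempts, as the retry.go log lines above do.
		backoff := 200 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			notRunning := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					notRunning++
				}
			}
			if notRunning == 0 {
				fmt.Println("all kube-system pods are running")
				return
			}
			fmt.Printf("will retry after %v: %d pods not yet running\n", backoff, notRunning)
			time.Sleep(backoff)
			backoff *= 2
		}
		fmt.Println("timed out waiting for kube-system pods")
	}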
	
	
	==> CRI-O <==
	Oct 13 23:20:14 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:14.088351243Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:20:32 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:32.856279343Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=307e3bb8-2169-412c-a93a-bdc84bfcf990 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.058530313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bf279553-638e-44a0-9539-7d53f323d396 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.105996007Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=7f712469-04ca-4b3d-ae71-1cd9a56db2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.106289039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.115762878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.116608326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.147824651Z" level=info msg="Created container 56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=7f712469-04ca-4b3d-ae71-1cd9a56db2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.149009777Z" level=info msg="Starting container: 56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b" id=b18c7846-4c96-4d1a-87dc-7a0fa81d54ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.150905329Z" level=info msg="Started container" PID=1709 containerID=56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper id=b18c7846-4c96-4d1a-87dc-7a0fa81d54ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cc108d4284eb40a5d50319faed914fc495d8e759cb4e474538acaf5a3ec28be
	Oct 13 23:20:33 default-k8s-diff-port-033746 conmon[1706]: conmon 56457140a6afa533157c <ninfo>: container 1709 exited with status 1
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.220040059Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=e66ed8c1-80fd-4bf0-8e63-e1b81a70485f name=/runtime.v1.ImageService/PullImage
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.220737595Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=23443f62-f96f-4b47-83ed-3e6500be003a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.223508453Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a657fccc-c56e-45bc-8c1d-7035e97bd082 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.232275648Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard" id=d013c055-41eb-4913-ad45-148933bb446b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.233142864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.238266717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.238629065Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2845502e3c5154a9b6d23541058b2b90529d18ee9f638f710165f2a4722edda/merged/etc/group: no such file or directory"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.239239186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.267455932Z" level=info msg="Created container 224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard" id=d013c055-41eb-4913-ad45-148933bb446b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.271900939Z" level=info msg="Starting container: 224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a" id=c88dea41-bf43-4762-8597-4b15b73a1c1b name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.272962711Z" level=info msg="Removing container: 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.284901798Z" level=info msg="Started container" PID=1719 containerID=224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard id=c88dea41-bf43-4762-8597-4b15b73a1c1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=67ef5446bd7f6d397cdc0e57f60668334b65e09d18f635586e7c008d1c284d6e
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.29323495Z" level=info msg="Error loading conmon cgroup of container 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135: cgroup deleted" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.299524532Z" level=info msg="Removed container 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	224275fc9d0a4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   15 seconds ago       Running             kubernetes-dashboard        0                   67ef5446bd7f6       kubernetes-dashboard-855c9754f9-gck5m                  kubernetes-dashboard
	56457140a6afa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   3                   9cc108d4284eb       dashboard-metrics-scraper-6ffb444bf9-gmddv             kubernetes-dashboard
	5c555d4efff48       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           42 seconds ago       Running             storage-provisioner         2                   e155be53020ed       storage-provisioner                                    kube-system
	07ae824f8dd13       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   5bc545837bce4       coredns-66bc5c9577-qf4lq                               kube-system
	2744854e183c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   e155be53020ed       storage-provisioner                                    kube-system
	00a0781d38ad3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   a647b1fc2f70e       busybox                                                default
	f817315f7da05       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   d59e813082648       kube-proxy-mxnv7                                       kube-system
	627054f4b8711       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   ae8465e3d2c8b       kindnet-vgn6v                                          kube-system
	066ad3d69ea84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   00b8ffcd9e676       etcd-default-k8s-diff-port-033746                      kube-system
	cab38f78f2c2f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   be98ef070aa07       kube-controller-manager-default-k8s-diff-port-033746   kube-system
	3f7f4bc1a19c7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   07fe84a0db6b9       kube-apiserver-default-k8s-diff-port-033746            kube-system
	4e7274aa96669       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0e72edea0e13a       kube-scheduler-default-k8s-diff-port-033746            kube-system
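	The table above is the CRI-side view of the node, and the CRI-O log earlier records the underlying gRPC calls (/runtime.v1.RuntimeService/CreateContainer, StartContainer, and so on). A rough sketch of reading the same container list over that API, assuming CRI-O's default socket path and the k8s.io/cri-api generated client:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket; /var/run/crio/crio.sock is CRI-O's default.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// ListContainers is what backs the "container status" table above.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-27s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}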
	
	
	==> coredns [07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38718 - 11337 "HINFO IN 7413653779595176445.904703530762834880. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.026886932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
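	The reflector errors above are plain TCP i/o timeouts against the kubernetes Service VIP (10.96.0.1:443); they clear once kube-proxy reprograms the Service rules (see the kube-proxy section below). The failure mode can be reproduced with a one-shot dial; the address and timeout here are taken from the log lines:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the in-cluster apiserver VIP the way the failing reflector
		// does; before the Service rules exist this times out.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. "i/o timeout", as in the log
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}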
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-033746
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-033746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-033746
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-033746
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:18:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-033746
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b334b9dc-cabb-43d9-9bf2-cf916bb499bf
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 coredns-66bc5c9577-qf4lq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m44s
	  kube-system                 etcd-default-k8s-diff-port-033746                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m50s
	  kube-system                 kindnet-vgn6v                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m45s
	  kube-system                 kube-apiserver-default-k8s-diff-port-033746             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-033746    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-mxnv7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-scheduler-default-k8s-diff-port-033746             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gmddv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gck5m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m42s                kube-proxy       
	  Normal   Starting                 71s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  3m1s (x8 over 3m1s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m1s (x8 over 3m1s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m1s (x8 over 3m1s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m50s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m50s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m50s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m50s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m50s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m45s                node-controller  Node default-k8s-diff-port-033746 event: Registered Node default-k8s-diff-port-033746 in Controller
	  Normal   NodeReady                2m3s                 kubelet          Node default-k8s-diff-port-033746 status is now: NodeReady
	  Normal   Starting                 85s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s (x8 over 85s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s (x8 over 85s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s (x8 over 85s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           73s                  node-controller  Node default-k8s-diff-port-033746 event: Registered Node default-k8s-diff-port-033746 in Controller
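	The node_conditions.go lines in the start log verify exactly the Conditions and Capacity blocks shown in this section. A minimal sketch of the same check with client-go, assuming the node name from this report and a working kubeconfig:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Fetch the node and print the pressure conditions and capacity
		// figures that the "describe nodes" output above reports.
		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-033746", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	}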
	
	
	==> dmesg <==
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	[ +26.588739] overlayfs: idmapped layers are currently not supported
	[Oct13 23:19] overlayfs: idmapped layers are currently not supported
	[ +12.709304] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952] <==
	{"level":"warn","ts":"2025-10-13T23:19:29.605995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.635281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.674216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.730976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.749802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.811325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.831704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.944047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.946723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.998885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.093797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.167248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.209891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.275466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.315866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.379754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.380580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.417377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.439843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.471444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.500147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.532500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.571171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.596339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.695156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:20:49 up  3:03,  0 user,  load average: 2.79, 3.60, 2.97
	Linux default-k8s-diff-port-033746 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388] <==
	I1013 23:19:33.703483       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:19:33.703666       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:19:33.703680       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:19:33.703692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:19:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:19:34.076721       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:19:34.076751       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:19:34.076761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:19:34.077090       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:20:04.068893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:20:04.077509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:20:04.077627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:20:04.077715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 23:20:05.577662       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:20:05.577690       1 metrics.go:72] Registering metrics
	I1013 23:20:05.577756       1 controller.go:711] "Syncing nftables rules"
	I1013 23:20:14.072495       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:14.072616       1 main.go:301] handling current node
	I1013 23:20:24.067764       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:24.067796       1 main.go:301] handling current node
	I1013 23:20:34.069167       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:34.069201       1 main.go:301] handling current node
	I1013 23:20:44.072518       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:44.072555       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636] <==
	I1013 23:19:32.092331       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:19:32.115945       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:19:32.235431       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:19:32.244563       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 23:19:32.244593       1 policy_source.go:240] refreshing policies
	I1013 23:19:32.287703       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:19:32.289016       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:19:32.289370       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:19:32.290193       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 23:19:32.290207       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 23:19:32.305559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:19:32.375236       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:19:32.379989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 23:19:32.463459       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:19:32.860122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:19:32.917696       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:19:34.983689       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:19:35.250299       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:19:35.395975       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:19:35.453902       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:19:35.802221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.18.251"}
	I1013 23:19:35.834893       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.34.19"}
	I1013 23:19:37.020362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:19:37.122033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:19:37.167796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07] <==
	I1013 23:19:36.799137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 23:19:36.799264       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 23:19:36.805300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 23:19:36.806519       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:19:36.811139       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 23:19:36.815633       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:19:36.815722       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:19:36.819151       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:19:36.823205       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:19:36.823311       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:19:36.824034       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:19:36.824100       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:19:36.831176       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:19:36.831452       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:19:36.839205       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:19:36.839396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:19:36.839531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 23:19:36.839573       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:19:36.842529       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:19:36.854088       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:19:36.854215       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:19:36.960098       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:36.981919       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:36.982022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:19:36.982054       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215] <==
	I1013 23:19:36.970438       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:19:37.516947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:19:37.617696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:19:37.617820       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:19:37.617924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:19:37.655292       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:19:37.655363       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:19:37.659663       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:19:37.660015       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:19:37.660084       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:37.661409       1 config.go:200] "Starting service config controller"
	I1013 23:19:37.661502       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:19:37.661560       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:19:37.661612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:19:37.661653       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:19:37.661682       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:19:37.662341       1 config.go:309] "Starting node config controller"
	I1013 23:19:37.662402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:19:37.662434       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:19:37.761769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:19:37.761811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:19:37.761854       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b] <==
	I1013 23:19:31.864259       1 serving.go:386] Generated self-signed cert in-memory
	I1013 23:19:37.068889       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:19:37.068995       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:37.074809       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:19:37.075265       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 23:19:37.075343       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 23:19:37.075425       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:19:37.084794       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:37.089468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:37.089582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.089623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.178046       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 23:19:37.191210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.191351       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:19:46 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:46.113632     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:46 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:46.113784     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:47 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:47.117611     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:47 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:47.117775     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:48 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:48.125260     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:48 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:48.125463     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:58 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:58.855189     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:59.158425     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:59.158641     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:59.158813     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:06 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:06.183907     784 scope.go:117] "RemoveContainer" containerID="2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	Oct 13 23:20:07 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:07.708958     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:07 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:07.709125     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:18 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:18.855522     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:18 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:18.855781     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:32 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:32.855354     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:33.256137     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:33.256471     784 scope.go:117] "RemoveContainer" containerID="56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:33.256623     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:37.708817     784 scope.go:117] "RemoveContainer" containerID="56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:37.709511     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:37.724999     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m" podStartSLOduration=5.2720836030000005 podStartE2EDuration="1m0.7249799s" podCreationTimestamp="2025-10-13 23:19:37 +0000 UTC" firstStartedPulling="2025-10-13 23:19:37.769419893 +0000 UTC m=+13.185110335" lastFinishedPulling="2025-10-13 23:20:33.22231619 +0000 UTC m=+68.638006632" observedRunningTime="2025-10-13 23:20:34.274546172 +0000 UTC m=+69.690236614" watchObservedRunningTime="2025-10-13 23:20:37.7249799 +0000 UTC m=+73.140670350"
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a] <==
	2025/10/13 23:20:33 Using namespace: kubernetes-dashboard
	2025/10/13 23:20:33 Using in-cluster config to connect to apiserver
	2025/10/13 23:20:33 Using secret token for csrf signing
	2025/10/13 23:20:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:20:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:20:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:20:33 Generating JWE encryption key
	2025/10/13 23:20:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:20:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:20:34 Initializing JWE encryption key from synchronized object
	2025/10/13 23:20:34 Creating in-cluster Sidecar client
	2025/10/13 23:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:20:34 Serving insecurely on HTTP port: 9090
	2025/10/13 23:20:33 Starting overwatch
	
	
	==> storage-provisioner [2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394] <==
	I1013 23:19:35.423174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:20:05.967299       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35] <==
	W1013 23:20:25.680506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:27.683990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:27.688949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:29.692267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:29.698246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:31.702775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:31.714902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:33.717770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:33.722796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:35.725738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:35.730495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:37.737815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:37.743341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:39.746400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:39.754150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:41.757436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:41.762687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:43.765546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:43.772504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:45.776264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:45.789712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:47.793058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:47.801110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:49.804462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:49.810094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
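
The kubelet entries above show CrashLoopBackOff doubling its restart delay for dashboard-metrics-scraper: back-off 10s, then 20s, then 40s. A minimal Go sketch of that schedule, assuming the commonly documented kubelet defaults of a 10s base delay that doubles after each failed restart and is capped at 5 minutes; this is illustrative only, not kubelet or minikube code:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelays returns the first n CrashLoopBackOff restart delays under
// the assumed defaults: 10s base, doubling per failed restart, 5m cap.
func crashLoopDelays(n int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s], matching the 10s/20s/40s
	// progression visible in the kubelet log above.
	fmt.Println(crashLoopDelays(6))
}
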
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
E1013 23:20:50.071692  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746: exit status 2 (527.286776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
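
The --format values used by these status checks ({{.APIServer}} here, {{.Host}} later in this report) are Go text/template strings that minikube executes against its status struct, and a nonzero exit code encodes components that are not in the expected state, which is why the harness notes that exit status 2 "may be ok" for a cluster it just tried to pause. A minimal sketch of the template mechanics, using a hypothetical stand-in struct rather than minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// status is a hypothetical stand-in for minikube's status struct; only the
// field names exercised by this report's --format strings are included.
type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// "{{.APIServer}}" selects a single field, so the command prints just
	// "Running" even when other components are stopped or paused.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
		panic(err)
	}
}
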
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
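
The first storage-provisioner block above died with "dial tcp 10.96.0.1:443: i/o timeout"; 10.96.0.1 is the first address of the cluster's ServiceCIDR (10.96.0.0/12 in the config dump further down), i.e. the in-cluster VIP for the apiserver, so the timeout means in-cluster traffic to the apiserver was black-holed at that moment. A small hypothetical probe for the same condition, illustrative only:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// From outside the cluster, or while the node is paused, this dial
	// times out exactly like the provisioner's error in the log above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}
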
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-033746
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-033746:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	        "Created": "2025-10-13T23:17:28.705422027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 639914,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T23:19:17.508813231Z",
	            "FinishedAt": "2025-10-13T23:19:14.635369501Z"
	        },
	        "Image": "sha256:2fa205e27ddde4d6f4ea12f275ceb14f4fc2732501715e54e8667f8f637e51a1",
	        "ResolvConfPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hostname",
	        "HostsPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/hosts",
	        "LogPath": "/var/lib/docker/containers/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090/278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090-json.log",
	        "Name": "/default-k8s-diff-port-033746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-033746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-033746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "278dbdd59e84ffb8951ec6dd14dd70b247765ff6e03352c0ba78c6edbab30090",
	                "LowerDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2-init/diff:/var/lib/docker/overlay2/583b3976590c94cec17256ccbb36b53a93cc5ff96af263a14525cfd34670b3e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47086d890cf5ed73bcdc38e56a784b112144ff6f6a1daadf2f65cfeaa76880e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-033746",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-033746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-033746",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-033746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "370dba46f787089ceb1fb58dbb2fafcff0981c8936e651811a17b4056269b265",
	            "SandboxKey": "/var/run/docker/netns/370dba46f787",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-033746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:2a:ca:aa:fd:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8549f6a07be41a945dcb145bb71d1b75a39e75ddc68f75d19380e8800e056e42",
	                    "EndpointID": "7122e7e7ee671940dd54a8f5f6b6d601a2a1b1e3d09a723ff675629ccc79bc22",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-033746",
	                        "278dbdd59e84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
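
The Ports map in the inspect output above shows every container port published on 127.0.0.1 with an ephemeral host port (22/tcp on 33489, 8444/tcp on 33492, and so on), and minikube reads these mappings back with the inspect template that appears later in this log. A small Go wrapper around that same query, sketched under the assumption that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker published for the container's
// 22/tcp, using the same inspect template minikube runs in the log below.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("default-k8s-diff-port-033746")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("22/tcp published on 127.0.0.1:" + port)
}
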
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746: exit status 2 (490.003551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-033746 logs -n 25: (1.310646307s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p no-preload-985461                                                                                                                                                                                                                          │ no-preload-985461            │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ delete  │ -p disable-driver-mounts-320520                                                                                                                                                                                                               │ disable-driver-mounts-320520 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ image   │ embed-certs-505482 image list --format=json                                                                                                                                                                                                   │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:17 UTC │
	│ pause   │ -p embed-certs-505482 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │                     │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:17 UTC │ 13 Oct 25 23:18 UTC │
	│ delete  │ -p embed-certs-505482                                                                                                                                                                                                                         │ embed-certs-505482           │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable metrics-server -p newest-cni-041709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p newest-cni-041709 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:18 UTC │
	│ start   │ -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │ 13 Oct 25 23:19 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-033746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-033746 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ image   │ newest-cni-041709 image list --format=json                                                                                                                                                                                                    │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ pause   │ -p newest-cni-041709 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │                     │
	│ delete  │ -p newest-cni-041709                                                                                                                                                                                                                          │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ delete  │ -p newest-cni-041709                                                                                                                                                                                                                          │ newest-cni-041709            │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ start   │ -p auto-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-557095                  │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-033746 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:19 UTC │
	│ start   │ -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:19 UTC │ 13 Oct 25 23:20 UTC │
	│ ssh     │ -p auto-557095 pgrep -a kubelet                                                                                                                                                                                                               │ auto-557095                  │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │ 13 Oct 25 23:20 UTC │
	│ image   │ default-k8s-diff-port-033746 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │ 13 Oct 25 23:20 UTC │
	│ pause   │ -p default-k8s-diff-port-033746 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-033746 │ jenkins │ v1.37.0 │ 13 Oct 25 23:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 23:19:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 23:19:16.927489  639746 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:19:16.927638  639746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:16.927662  639746 out.go:374] Setting ErrFile to fd 2...
	I1013 23:19:16.927685  639746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:19:16.927975  639746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:19:16.928382  639746 out.go:368] Setting JSON to false
	I1013 23:19:16.929264  639746 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10893,"bootTime":1760386664,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:19:16.929330  639746 start.go:141] virtualization:  
	I1013 23:19:16.973359  639746 out.go:179] * [default-k8s-diff-port-033746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:19:17.016525  639746 notify.go:220] Checking for updates...
	I1013 23:19:17.016541  639746 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:19:17.049541  639746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:19:17.086005  639746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:17.110690  639746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:19:17.149001  639746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:19:17.167262  639746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:19:17.200110  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:17.200727  639746 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:19:17.222584  639746 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:19:17.222716  639746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:19:17.280630  639746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-13 23:19:17.271316572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:19:17.280745  639746 docker.go:318] overlay module found
	I1013 23:19:17.290418  639746 out.go:179] * Using the docker driver based on existing profile
	I1013 23:19:17.316716  639746 start.go:305] selected driver: docker
	I1013 23:19:17.316741  639746 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:17.316856  639746 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:19:17.317539  639746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:19:17.375641  639746 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-13 23:19:17.365556256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:19:17.376024  639746 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:19:17.376064  639746 cni.go:84] Creating CNI manager for ""
	I1013 23:19:17.376124  639746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:17.376164  639746 start.go:349] cluster config:
	{Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:17.385073  639746 out.go:179] * Starting "default-k8s-diff-port-033746" primary control-plane node in "default-k8s-diff-port-033746" cluster
	I1013 23:19:17.388802  639746 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 23:19:17.394382  639746 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1013 23:19:17.398035  639746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:17.398100  639746 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 23:19:17.398133  639746 cache.go:58] Caching tarball of preloaded images
	I1013 23:19:17.398159  639746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 23:19:17.398228  639746 preload.go:233] Found /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 23:19:17.398239  639746 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 23:19:17.398352  639746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:19:17.423507  639746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1013 23:19:17.423546  639746 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1013 23:19:17.423568  639746 cache.go:232] Successfully downloaded all kic artifacts
	I1013 23:19:17.423615  639746 start.go:360] acquireMachinesLock for default-k8s-diff-port-033746: {Name:mk4950372c3cd6b03a758b4772e5c43a69d20962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 23:19:17.423692  639746 start.go:364] duration metric: took 56.319µs to acquireMachinesLock for "default-k8s-diff-port-033746"
	I1013 23:19:17.423715  639746 start.go:96] Skipping create...Using existing machine configuration
	I1013 23:19:17.423727  639746 fix.go:54] fixHost starting: 
	I1013 23:19:17.424096  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:17.445794  639746 fix.go:112] recreateIfNeeded on default-k8s-diff-port-033746: state=Stopped err=<nil>
	W1013 23:19:17.445822  639746 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 23:19:12.921071  639302 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 23:19:12.921301  639302 start.go:159] libmachine.API.Create for "auto-557095" (driver="docker")
	I1013 23:19:12.921358  639302 client.go:168] LocalClient.Create starting
	I1013 23:19:12.921428  639302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem
	I1013 23:19:12.921464  639302 main.go:141] libmachine: Decoding PEM data...
	I1013 23:19:12.921480  639302 main.go:141] libmachine: Parsing certificate...
	I1013 23:19:12.921538  639302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem
	I1013 23:19:12.921571  639302 main.go:141] libmachine: Decoding PEM data...
	I1013 23:19:12.921585  639302 main.go:141] libmachine: Parsing certificate...
	I1013 23:19:12.921963  639302 cli_runner.go:164] Run: docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 23:19:12.937558  639302 cli_runner.go:211] docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 23:19:12.937651  639302 network_create.go:284] running [docker network inspect auto-557095] to gather additional debugging logs...
	I1013 23:19:12.937673  639302 cli_runner.go:164] Run: docker network inspect auto-557095
	W1013 23:19:12.951481  639302 cli_runner.go:211] docker network inspect auto-557095 returned with exit code 1
	I1013 23:19:12.951514  639302 network_create.go:287] error running [docker network inspect auto-557095]: docker network inspect auto-557095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-557095 not found
	I1013 23:19:12.951539  639302 network_create.go:289] output of [docker network inspect auto-557095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-557095 not found
	
	** /stderr **
	I1013 23:19:12.951627  639302 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:12.968675  639302 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
	I1013 23:19:12.968875  639302 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-57d99f1e9609 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:17:72:4c:c8:ba} reservation:<nil>}
	I1013 23:19:12.969160  639302 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-33ec4a6ec514 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:b6:7d:bc:fd} reservation:<nil>}
	I1013 23:19:12.969572  639302 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08f90}
	I1013 23:19:12.969596  639302 network_create.go:124] attempt to create docker network auto-557095 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 23:19:12.969653  639302 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-557095 auto-557095
	I1013 23:19:13.032590  639302 network_create.go:108] docker network auto-557095 192.168.76.0/24 created
	I1013 23:19:13.032623  639302 kic.go:121] calculated static IP "192.168.76.2" for the "auto-557095" container
	I1013 23:19:13.032699  639302 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 23:19:13.049233  639302 cli_runner.go:164] Run: docker volume create auto-557095 --label name.minikube.sigs.k8s.io=auto-557095 --label created_by.minikube.sigs.k8s.io=true
	I1013 23:19:13.067324  639302 oci.go:103] Successfully created a docker volume auto-557095
	I1013 23:19:13.067414  639302 cli_runner.go:164] Run: docker run --rm --name auto-557095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-557095 --entrypoint /usr/bin/test -v auto-557095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1013 23:19:13.578758  639302 oci.go:107] Successfully prepared a docker volume auto-557095
	I1013 23:19:13.578809  639302 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:13.578830  639302 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 23:19:13.578906  639302 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-557095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 23:19:17.427454  639302 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-557095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.848509371s)
	I1013 23:19:17.427493  639302 kic.go:203] duration metric: took 3.848660113s to extract preloaded images to volume ...
	W1013 23:19:17.427623  639302 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 23:19:17.427737  639302 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 23:19:17.488625  639302 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-557095 --name auto-557095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-557095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-557095 --network auto-557095 --ip 192.168.76.2 --volume auto-557095:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1013 23:19:17.449818  639746 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-033746" ...
	I1013 23:19:17.449915  639746 cli_runner.go:164] Run: docker start default-k8s-diff-port-033746
	I1013 23:19:17.816321  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:17.843928  639746 kic.go:430] container "default-k8s-diff-port-033746" state is running.
	I1013 23:19:17.846520  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:17.873626  639746 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/config.json ...
	I1013 23:19:17.873851  639746 machine.go:93] provisionDockerMachine start ...
	I1013 23:19:17.873909  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:17.908962  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:17.909277  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:17.909286  639746 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:19:17.909887  639746 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59696->127.0.0.1:33489: read: connection reset by peer
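
The handshake failure above is routine: sshd inside the freshly restarted container is not accepting connections yet, so the dial is retried until the hostname command succeeds a few seconds later (the 23:19:21 line below). A retry sketch (timings are illustrative assumptions, not minikube's):

package sshwait

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP endpoint until it accepts a connection or the
// deadline passes; boot-time resets are simply retried.
func waitForSSH(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
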
	I1013 23:19:21.054703  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:19:21.054728  639746 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-033746"
	I1013 23:19:21.054801  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.072469  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.072789  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.072809  639746 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-033746 && echo "default-k8s-diff-port-033746" | sudo tee /etc/hostname
	I1013 23:19:21.228269  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-033746
	
	I1013 23:19:21.228358  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.246504  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.246855  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.246875  639746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-033746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-033746/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-033746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:19:21.395363  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:19:21.395388  639746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:19:21.395414  639746 ubuntu.go:190] setting up certificates
	I1013 23:19:21.395425  639746 provision.go:84] configureAuth start
	I1013 23:19:21.395493  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:21.411973  639746 provision.go:143] copyHostCerts
	I1013 23:19:21.412055  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:19:21.412077  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:19:21.412157  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:19:21.412262  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:19:21.412272  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:19:21.412300  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:19:21.412366  639746 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:19:21.412380  639746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:19:21.412406  639746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:19:21.412468  639746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-033746 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-033746 localhost minikube]
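
The SAN list in the line above is the important part: the server certificate has to be valid for the loopback tunnel (127.0.0.1), the container IP, and the profile and host names. A sketch of building such a certificate with crypto/x509 (self-signed for brevity; the real step signs with the minikube CA, and PEM output is omitted):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-033746"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump below
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"default-k8s-diff-port-033746", "localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err) // DER bytes would be PEM-encoded into server.pem
}
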
	I1013 23:19:21.622522  639746 provision.go:177] copyRemoteCerts
	I1013 23:19:21.622594  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:19:21.622640  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.639445  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:21.743007  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:19:21.760509  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 23:19:21.778144  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:19:21.795704  639746 provision.go:87] duration metric: took 400.261571ms to configureAuth
	I1013 23:19:21.795729  639746 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:19:21.795919  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:21.796051  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:21.813097  639746 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:21.813397  639746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1013 23:19:21.813419  639746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:19:17.970467  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Running}}
	I1013 23:19:17.993111  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.027674  639302 cli_runner.go:164] Run: docker exec auto-557095 stat /var/lib/dpkg/alternatives/iptables
	I1013 23:19:18.092554  639302 oci.go:144] the created container "auto-557095" has a running status.
	I1013 23:19:18.092592  639302 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa...
	I1013 23:19:18.782917  639302 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 23:19:18.820597  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.845301  639302 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 23:19:18.845320  639302 kic_runner.go:114] Args: [docker exec --privileged auto-557095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 23:19:18.910229  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:18.936511  639302 machine.go:93] provisionDockerMachine start ...
	I1013 23:19:18.936600  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:18.968705  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:18.969040  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:18.969049  639302 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 23:19:18.969831  639302 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56242->127.0.0.1:33494: read: connection reset by peer
	I1013 23:19:22.122952  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-557095
	
	I1013 23:19:22.123023  639302 ubuntu.go:182] provisioning hostname "auto-557095"
	I1013 23:19:22.123174  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.143020  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:22.143353  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:22.143366  639302 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-557095 && echo "auto-557095" | sudo tee /etc/hostname
	I1013 23:19:22.324314  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-557095
	
	I1013 23:19:22.324405  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.348716  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:22.349037  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:22.349060  639302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-557095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-557095/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-557095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 23:19:22.514104  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 23:19:22.514154  639302 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-428797/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-428797/.minikube}
	I1013 23:19:22.514180  639302 ubuntu.go:190] setting up certificates
	I1013 23:19:22.514190  639302 provision.go:84] configureAuth start
	I1013 23:19:22.514255  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:22.559729  639302 provision.go:143] copyHostCerts
	I1013 23:19:22.559804  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem, removing ...
	I1013 23:19:22.559817  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem
	I1013 23:19:22.559889  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/ca.pem (1082 bytes)
	I1013 23:19:22.559989  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem, removing ...
	I1013 23:19:22.560002  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem
	I1013 23:19:22.560026  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/cert.pem (1123 bytes)
	I1013 23:19:22.560082  639302 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem, removing ...
	I1013 23:19:22.560090  639302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem
	I1013 23:19:22.560111  639302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-428797/.minikube/key.pem (1679 bytes)
	I1013 23:19:22.560165  639302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem org=jenkins.auto-557095 san=[127.0.0.1 192.168.76.2 auto-557095 localhost minikube]
	I1013 23:19:22.164048  639746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:19:22.164068  639746 machine.go:96] duration metric: took 4.290208164s to provisionDockerMachine
	I1013 23:19:22.164079  639746 start.go:293] postStartSetup for "default-k8s-diff-port-033746" (driver="docker")
	I1013 23:19:22.164090  639746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:19:22.164171  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:19:22.164214  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.191277  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.307816  639746 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:19:22.311521  639746 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:19:22.311550  639746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:19:22.311561  639746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:19:22.311614  639746 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:19:22.311695  639746 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:19:22.311796  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:19:22.319531  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:22.345089  639746 start.go:296] duration metric: took 180.993956ms for postStartSetup
	I1013 23:19:22.345185  639746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:19:22.345242  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.370373  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.480322  639746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:19:22.485510  639746 fix.go:56] duration metric: took 5.06178212s for fixHost
	I1013 23:19:22.485534  639746 start.go:83] releasing machines lock for "default-k8s-diff-port-033746", held for 5.061831416s
	I1013 23:19:22.485607  639746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-033746
	I1013 23:19:22.502390  639746 ssh_runner.go:195] Run: cat /version.json
	I1013 23:19:22.502442  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.502482  639746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:19:22.502535  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:22.528273  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.529497  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:22.639355  639746 ssh_runner.go:195] Run: systemctl --version
	I1013 23:19:22.735670  639746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:19:22.789577  639746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:19:22.794199  639746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:19:22.794264  639746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:19:22.802705  639746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 23:19:22.802726  639746 start.go:495] detecting cgroup driver to use...
	I1013 23:19:22.802757  639746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:19:22.802802  639746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:19:22.819801  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:19:22.834444  639746 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:19:22.834505  639746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:19:22.851224  639746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:19:22.865940  639746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:19:23.046289  639746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:19:23.195146  639746 docker.go:234] disabling docker service ...
	I1013 23:19:23.195224  639746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:19:23.211185  639746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:19:23.226839  639746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:19:23.357051  639746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:19:23.524439  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:19:23.538480  639746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:19:23.555973  639746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:19:23.556044  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.565978  639746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:19:23.566059  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.575304  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.583990  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.592931  639746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:19:23.601082  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.609946  639746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.621062  639746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:23.632390  639746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:19:23.642116  639746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:19:23.650402  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:23.801124  639746 ssh_runner.go:195] Run: sudo systemctl restart crio
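
The sed one-liners above patch /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image and cgroup driver, pinning conmon_cgroup, and injecting the unprivileged-port sysctl, before the daemon-reload and crio restart pick the changes up. One of those edits expressed in Go (a sketch of the same regex replace; setCgroupManager is a hypothetical helper):

package crioconf

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line the same way the
// `sudo sed -i 's|^.*cgroup_manager = .*$|...|'` call above does.
func setCgroupManager(path, driver string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", driver)))
	return os.WriteFile(path, out, 0o644)
}
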
	I1013 23:19:23.975207  639746 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:19:23.975273  639746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:19:23.979997  639746 start.go:563] Will wait 60s for crictl version
	I1013 23:19:23.980073  639746 ssh_runner.go:195] Run: which crictl
	I1013 23:19:23.983783  639746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:19:24.024549  639746 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:19:24.024642  639746 ssh_runner.go:195] Run: crio --version
	I1013 23:19:24.059374  639746 ssh_runner.go:195] Run: crio --version
	I1013 23:19:24.100092  639746 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:19:22.935263  639302 provision.go:177] copyRemoteCerts
	I1013 23:19:22.935331  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 23:19:22.935380  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:22.977178  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.079333  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 23:19:23.102520  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 23:19:23.141483  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 23:19:23.161989  639302 provision.go:87] duration metric: took 647.777406ms to configureAuth
	I1013 23:19:23.162066  639302 ubuntu.go:206] setting minikube options for container-runtime
	I1013 23:19:23.162303  639302 config.go:182] Loaded profile config "auto-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:23.162452  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.181157  639302 main.go:141] libmachine: Using SSH client type: native
	I1013 23:19:23.181454  639302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1013 23:19:23.181468  639302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 23:19:23.490519  639302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 23:19:23.490545  639302 machine.go:96] duration metric: took 4.554014075s to provisionDockerMachine
	I1013 23:19:23.490574  639302 client.go:171] duration metric: took 10.569188867s to LocalClient.Create
	I1013 23:19:23.490588  639302 start.go:167] duration metric: took 10.569289353s to libmachine.API.Create "auto-557095"
	I1013 23:19:23.490596  639302 start.go:293] postStartSetup for "auto-557095" (driver="docker")
	I1013 23:19:23.490609  639302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 23:19:23.490682  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 23:19:23.490728  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.515474  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.624070  639302 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 23:19:23.628150  639302 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 23:19:23.628179  639302 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 23:19:23.628190  639302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/addons for local assets ...
	I1013 23:19:23.628241  639302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-428797/.minikube/files for local assets ...
	I1013 23:19:23.628320  639302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem -> 4306522.pem in /etc/ssl/certs
	I1013 23:19:23.628444  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 23:19:23.638131  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:23.661852  639302 start.go:296] duration metric: took 171.238762ms for postStartSetup
	I1013 23:19:23.662290  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:23.679887  639302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/config.json ...
	I1013 23:19:23.680157  639302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 23:19:23.680202  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.713630  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.821957  639302 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 23:19:23.828213  639302 start.go:128] duration metric: took 10.910522077s to createHost
	I1013 23:19:23.828243  639302 start.go:83] releasing machines lock for "auto-557095", held for 10.910657476s
	I1013 23:19:23.828398  639302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-557095
	I1013 23:19:23.846977  639302 ssh_runner.go:195] Run: cat /version.json
	I1013 23:19:23.847028  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.847050  639302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 23:19:23.847200  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:23.883350  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:23.891226  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:24.079815  639302 ssh_runner.go:195] Run: systemctl --version
	I1013 23:19:24.087363  639302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 23:19:24.144903  639302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 23:19:24.151259  639302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 23:19:24.151325  639302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 23:19:24.188755  639302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
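
Unlike the restart above at 23:19:22, where nothing matched, this fresh node has bridge and podman CNI configs to move aside so kindnet can own pod networking. The find/-exec mv pipeline amounts to the following walk (a Go sketch under the same matching rules; disableBridgeCNI is a hypothetical name):

package cniutil

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs by appending
// ".mk_disabled", mirroring the find command in the log.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}
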
	I1013 23:19:24.188775  639302 start.go:495] detecting cgroup driver to use...
	I1013 23:19:24.188808  639302 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 23:19:24.188864  639302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 23:19:24.210214  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 23:19:24.227148  639302 docker.go:218] disabling cri-docker service (if available) ...
	I1013 23:19:24.227298  639302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 23:19:24.251001  639302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 23:19:24.270312  639302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 23:19:24.454434  639302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 23:19:24.641643  639302 docker.go:234] disabling docker service ...
	I1013 23:19:24.641699  639302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 23:19:24.673295  639302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 23:19:24.689277  639302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 23:19:24.888339  639302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 23:19:25.063664  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 23:19:25.078591  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 23:19:25.094791  639302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 23:19:25.094871  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.104470  639302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 23:19:25.104553  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.114313  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.124191  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.138561  639302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 23:19:25.148001  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.157522  639302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.173350  639302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 23:19:25.185572  639302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 23:19:25.194549  639302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 23:19:25.203664  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:25.395584  639302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 23:19:25.621794  639302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 23:19:25.621864  639302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 23:19:25.626181  639302 start.go:563] Will wait 60s for crictl version
	I1013 23:19:25.626246  639302 ssh_runner.go:195] Run: which crictl
	I1013 23:19:25.630432  639302 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 23:19:25.684186  639302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 23:19:25.684273  639302 ssh_runner.go:195] Run: crio --version
	I1013 23:19:25.733376  639302 ssh_runner.go:195] Run: crio --version
	I1013 23:19:25.798693  639302 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 23:19:24.103164  639746 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-033746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:24.120871  639746 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 23:19:24.125308  639746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:24.136732  639746 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:19:24.136850  639746 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:24.136904  639746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:24.174594  639746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:24.174613  639746 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:19:24.174667  639746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:24.212841  639746 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:24.212911  639746 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:19:24.212936  639746 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1013 23:19:24.213057  639746 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-033746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
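
Note the empty ExecStart= in the drop-in above: systemd requires clearing an inherited ExecStart before a drop-in may redefine it, so the generator always emits the pair. A rendering sketch with text/template (the template text is abridged from the log and the field names are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Kubelet": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"Node":    "default-k8s-diff-port-033746",
		"IP":      "192.168.85.2",
	})
}
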
	I1013 23:19:24.213160  639746 ssh_runner.go:195] Run: crio config
	I1013 23:19:24.308274  639746 cni.go:84] Creating CNI manager for ""
	I1013 23:19:24.308299  639746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:24.308350  639746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:19:24.308381  639746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-033746 NodeName:default-k8s-diff-port-033746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:19:24.308603  639746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-033746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 23:19:24.308699  639746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:19:24.324749  639746 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:19:24.324911  639746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:19:24.334287  639746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 23:19:24.352282  639746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:19:24.366680  639746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 23:19:24.380595  639746 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:19:24.385310  639746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:24.395820  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:24.565488  639746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:24.594294  639746 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746 for IP: 192.168.85.2
	I1013 23:19:24.594324  639746 certs.go:195] generating shared ca certs ...
	I1013 23:19:24.594341  639746 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:24.594550  639746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:19:24.594639  639746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:19:24.594655  639746 certs.go:257] generating profile certs ...
	I1013 23:19:24.594780  639746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.key
	I1013 23:19:24.594891  639746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key.5040eb68
	I1013 23:19:24.594960  639746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key
	I1013 23:19:24.595131  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:19:24.595192  639746 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:19:24.595207  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:19:24.595253  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:19:24.595299  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:19:24.595349  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:19:24.595425  639746 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:24.596114  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:19:24.641041  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:19:24.704536  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:19:24.737058  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:19:24.790669  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 23:19:24.835663  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:19:24.900200  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:19:24.917493  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 23:19:24.937041  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:19:24.979811  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:19:25.005140  639746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:19:25.029435  639746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:19:25.043963  639746 ssh_runner.go:195] Run: openssl version
	I1013 23:19:25.050691  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:19:25.060440  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.067835  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.067987  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:19:25.118280  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:19:25.128500  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:19:25.138690  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.143482  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.143560  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:19:25.189172  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:19:25.198170  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:19:25.207954  639746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.213398  639746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.213478  639746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:25.264149  639746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
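
The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS stacks locate a CA under /etc/ssl/certs by hashing its subject, so each PEM gets a <hash>.0 symlink. The openssl/ln pair in Go (shelling out for the hash exactly as the log does; hashLink is a hypothetical helper):

package certlink

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates <certsDir>/<subject-hash>.0 -> pemPath, mirroring
// `openssl x509 -hash -noout -in ...` followed by `ln -fs`.
func hashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}
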
	I1013 23:19:25.272420  639746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:19:25.284593  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 23:19:25.352756  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 23:19:25.443026  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 23:19:25.510921  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 23:19:25.636432  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 23:19:25.756697  639746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
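
Each -checkend 86400 run above asks whether a control-plane certificate will still be valid 24 hours from now; a nonzero exit would force regeneration instead of reuse. The same test in Go (a sketch; PEM loading omitted):

package certcheck

import (
	"crypto/x509"
	"time"
)

// expiresWithin reports whether cert will have expired d from now,
// i.e. the condition `openssl x509 -checkend` fails on.
func expiresWithin(cert *x509.Certificate, d time.Duration) bool {
	return time.Now().Add(d).After(cert.NotAfter)
}
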
	I1013 23:19:25.835496  639746 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-033746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-033746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:25.835585  639746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:19:25.835642  639746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:19:25.924948  639746 cri.go:89] found id: "066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952"
	I1013 23:19:25.924971  639746 cri.go:89] found id: "cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07"
	I1013 23:19:25.924980  639746 cri.go:89] found id: "3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636"
	I1013 23:19:25.924983  639746 cri.go:89] found id: "4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b"
	I1013 23:19:25.924987  639746 cri.go:89] found id: ""
	I1013 23:19:25.925037  639746 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 23:19:25.971412  639746 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T23:19:25Z" level=error msg="open /run/runc: no such file or directory"
	I1013 23:19:25.971509  639746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:19:25.995680  639746 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 23:19:25.995701  639746 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 23:19:25.995774  639746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 23:19:26.019441  639746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 23:19:26.019876  639746 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-033746" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:26.019997  639746 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-428797/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-033746" cluster setting kubeconfig missing "default-k8s-diff-port-033746" context setting]
	I1013 23:19:26.020356  639746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.021744  639746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 23:19:26.037804  639746 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 23:19:26.037851  639746 kubeadm.go:601] duration metric: took 42.14235ms to restartPrimaryControlPlane
	I1013 23:19:26.037860  639746 kubeadm.go:402] duration metric: took 202.376268ms to StartCluster
	I1013 23:19:26.037877  639746 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.037949  639746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:26.038680  639746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.038953  639746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:19:26.039210  639746 config.go:182] Loaded profile config "default-k8s-diff-port-033746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:26.039257  639746 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:19:26.039320  639746 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.039334  639746 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.039340  639746 addons.go:247] addon storage-provisioner should already be in state true
	I1013 23:19:26.039347  639746 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.039361  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.039366  639746 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.039372  639746 addons.go:247] addon dashboard should already be in state true
	I1013 23:19:26.039392  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.039809  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.039827  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.040252  639746 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-033746"
	I1013 23:19:26.040278  639746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-033746"
	I1013 23:19:26.040573  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.043582  639746 out.go:179] * Verifying Kubernetes components...
	I1013 23:19:26.048351  639746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:26.117941  639746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:19:26.119805  639746 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-033746"
	W1013 23:19:26.119826  639746 addons.go:247] addon default-storageclass should already be in state true
	I1013 23:19:26.119852  639746 host.go:66] Checking if "default-k8s-diff-port-033746" exists ...
	I1013 23:19:26.120958  639746 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:26.120978  639746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:19:26.121041  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.121338  639746 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-033746 --format={{.State.Status}}
	I1013 23:19:26.124811  639746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 23:19:26.131223  639746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 23:19:26.134199  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 23:19:26.134231  639746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 23:19:26.134303  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.177050  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.183302  639746 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:26.183321  639746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:19:26.183382  639746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-033746
	I1013 23:19:26.183903  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.212776  639746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/default-k8s-diff-port-033746/id_rsa Username:docker}
	I1013 23:19:26.432196  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:26.505057  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:26.507558  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 23:19:26.507577  639746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 23:19:26.540337  639746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:26.623405  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 23:19:26.623477  639746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 23:19:26.767680  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 23:19:26.767755  639746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 23:19:26.883685  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 23:19:26.883759  639746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 23:19:25.801836  639302 cli_runner.go:164] Run: docker network inspect auto-557095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 23:19:25.826727  639302 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 23:19:25.831160  639302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
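Note: the pipeline above rewrites /etc/hosts in place: grep -v drops any stale host.minikube.internal line, echo appends the fresh mapping, and the temp file is copied back with sudo. A rough Go equivalent (hypothetical helper; the real step is the shell one-liner shown in the log):

package main

import (
	"os"
	"strings"
)

// updateHostsEntry drops any line ending in "\t<name>" and appends
// "<ip>\t<name>", the same edit as the grep/echo pipeline above.
func updateHostsEntry(path, ip, name string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}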
	I1013 23:19:25.855682  639302 kubeadm.go:883] updating cluster {Name:auto-557095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 23:19:25.855797  639302 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 23:19:25.855865  639302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:25.904337  639302 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:25.904364  639302 crio.go:433] Images already preloaded, skipping extraction
	I1013 23:19:25.904433  639302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 23:19:25.940769  639302 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 23:19:25.940796  639302 cache_images.go:85] Images are preloaded, skipping loading
	I1013 23:19:25.940804  639302 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 23:19:25.940894  639302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-557095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 23:19:25.940974  639302 ssh_runner.go:195] Run: crio config
	I1013 23:19:26.086948  639302 cni.go:84] Creating CNI manager for ""
	I1013 23:19:26.086976  639302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:26.087000  639302 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 23:19:26.087024  639302 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-557095 NodeName:auto-557095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 23:19:26.087281  639302 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-557095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
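Note: the generated config above is one multi-document YAML file stacking four objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A naive stdlib-only Go sketch that splits such a stream and lists each document's kind (hypothetical helper; it assumes --- only appears as a separator line):

package main

import (
	"fmt"
	"strings"
)

// splitKinds splits a multi-document YAML stream on "---" separator
// lines and reports each document's top-level kind field.
func splitKinds(yamlText string) []string {
	var kinds []string
	for _, doc := range strings.Split(yamlText, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
				break
			}
		}
	}
	return kinds
}

func main() {
	sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
	fmt.Println(splitKinds(sample)) // [InitConfiguration ClusterConfiguration]
}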
	I1013 23:19:26.087357  639302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 23:19:26.097516  639302 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 23:19:26.097590  639302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 23:19:26.116814  639302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 23:19:26.142350  639302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 23:19:26.217477  639302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1013 23:19:26.233501  639302 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 23:19:26.243610  639302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 23:19:26.271578  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:26.497058  639302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:26.549185  639302 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095 for IP: 192.168.76.2
	I1013 23:19:26.549219  639302 certs.go:195] generating shared ca certs ...
	I1013 23:19:26.549240  639302 certs.go:227] acquiring lock for ca certs: {Name:mk5c8d44dec95378c0e1e24b9a8172d4520fe512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.549379  639302 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key
	I1013 23:19:26.549428  639302 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key
	I1013 23:19:26.549440  639302 certs.go:257] generating profile certs ...
	I1013 23:19:26.549492  639302 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key
	I1013 23:19:26.549508  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt with IP's: []
	I1013 23:19:26.744846  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt ...
	I1013 23:19:26.744881  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: {Name:mk0fc3d55b404b59e78fcc97a03a72c2430acd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.745073  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key ...
	I1013 23:19:26.745089  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.key: {Name:mkb7aa04b5f4ff3645d883ff3cba98a0fd4ee60b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:26.745174  639302 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103
	I1013 23:19:26.745194  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
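Note: the SAN list above is not arbitrary: 10.96.0.1 is the first address of the ServiceCIDR 10.96.0.0/12 (the in-cluster kubernetes service IP that clients use to reach the apiserver), 192.168.76.2 is the node IP, and 127.0.0.1 and 10.0.0.1 round out the defaults. A small Go sketch deriving that first service IP (hypothetical helper, IPv4 only):

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR,
// e.g. "10.96.0.0/12" -> 10.96.0.1, which must appear in the
// apiserver certificate SANs.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 only in this sketch")
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3]++ // network address + 1
	return out, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}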
	I1013 23:19:27.018805  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 ...
	I1013 23:19:27.018840  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103: {Name:mk8ec37282c626c1658f4976f369f92f39f4bf71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.019068  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103 ...
	I1013 23:19:27.019103  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103: {Name:mk43d5c29e5b5df93e61c350ece6b1d46dcec909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.019226  639302 certs.go:382] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt.c6e86103 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt
	I1013 23:19:27.019319  639302 certs.go:386] copying /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key.c6e86103 -> /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key
	I1013 23:19:27.019380  639302 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key
	I1013 23:19:27.019399  639302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt with IP's: []
	I1013 23:19:27.517916  639302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt ...
	I1013 23:19:27.517951  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt: {Name:mk59fa72232a9083545ed59272c701500ce02942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.518186  639302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key ...
	I1013 23:19:27.518201  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key: {Name:mkf5ee79cec32ff3de08ba41e9f0c35e7d1f8c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:27.518412  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem (1338 bytes)
	W1013 23:19:27.518460  639302 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652_empty.pem, impossibly tiny 0 bytes
	I1013 23:19:27.518477  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 23:19:27.518500  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/ca.pem (1082 bytes)
	I1013 23:19:27.518527  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/cert.pem (1123 bytes)
	I1013 23:19:27.518553  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/certs/key.pem (1679 bytes)
	I1013 23:19:27.518608  639302 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem (1708 bytes)
	I1013 23:19:27.525289  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 23:19:27.595468  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 23:19:27.652958  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 23:19:27.689376  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 23:19:27.720324  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 23:19:27.750152  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 23:19:27.778212  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 23:19:27.801976  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 23:19:27.835744  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/ssl/certs/4306522.pem --> /usr/share/ca-certificates/4306522.pem (1708 bytes)
	I1013 23:19:27.864400  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 23:19:27.893721  639302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-428797/.minikube/certs/430652.pem --> /usr/share/ca-certificates/430652.pem (1338 bytes)
	I1013 23:19:27.922998  639302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 23:19:27.947044  639302 ssh_runner.go:195] Run: openssl version
	I1013 23:19:27.955741  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4306522.pem && ln -fs /usr/share/ca-certificates/4306522.pem /etc/ssl/certs/4306522.pem"
	I1013 23:19:27.969411  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4306522.pem
	I1013 23:19:27.974157  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 22:20 /usr/share/ca-certificates/4306522.pem
	I1013 23:19:27.974298  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4306522.pem
	I1013 23:19:28.020576  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4306522.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 23:19:28.031337  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 23:19:28.041908  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.046843  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 22:13 /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.046968  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 23:19:28.090780  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 23:19:28.100946  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/430652.pem && ln -fs /usr/share/ca-certificates/430652.pem /etc/ssl/certs/430652.pem"
	I1013 23:19:28.111004  639302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.119579  639302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 22:20 /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.119703  639302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/430652.pem
	I1013 23:19:28.162509  639302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/430652.pem /etc/ssl/certs/51391683.0"
	I1013 23:19:28.172311  639302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 23:19:28.177388  639302 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 23:19:28.177500  639302 kubeadm.go:400] StartCluster: {Name:auto-557095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-557095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 23:19:28.177627  639302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 23:19:28.177728  639302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 23:19:28.217737  639302 cri.go:89] found id: ""
	I1013 23:19:28.217888  639302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 23:19:28.226561  639302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 23:19:28.234738  639302 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 23:19:28.234808  639302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 23:19:28.251440  639302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 23:19:28.251461  639302 kubeadm.go:157] found existing configuration files:
	
	I1013 23:19:28.251514  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 23:19:28.264032  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 23:19:28.264102  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 23:19:28.276389  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 23:19:28.287370  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 23:19:28.287439  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 23:19:28.299505  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 23:19:28.315593  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 23:19:28.315666  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 23:19:28.323060  639302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 23:19:28.336505  639302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 23:19:28.336578  639302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 23:19:28.346768  639302 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 23:19:28.430736  639302 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 23:19:28.430817  639302 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 23:19:28.475257  639302 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 23:19:28.475333  639302 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 23:19:28.475384  639302 kubeadm.go:318] OS: Linux
	I1013 23:19:28.475433  639302 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 23:19:28.475494  639302 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 23:19:28.475544  639302 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 23:19:28.475594  639302 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 23:19:28.475673  639302 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 23:19:28.475744  639302 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 23:19:28.475796  639302 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 23:19:28.475852  639302 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 23:19:28.475905  639302 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 23:19:28.651754  639302 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 23:19:28.651873  639302 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 23:19:28.651972  639302 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 23:19:28.669285  639302 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 23:19:26.983579  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 23:19:26.983663  639746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 23:19:27.036225  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 23:19:27.036300  639746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 23:19:27.083429  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 23:19:27.083505  639746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 23:19:27.128175  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 23:19:27.128253  639746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 23:19:27.172188  639746 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:19:27.172272  639746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 23:19:27.204172  639746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 23:19:28.673215  639302 out.go:252]   - Generating certificates and keys ...
	I1013 23:19:28.673314  639302 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 23:19:28.673423  639302 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 23:19:29.054617  639302 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 23:19:30.577463  639302 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 23:19:31.151496  639302 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 23:19:31.239488  639302 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 23:19:31.443458  639302 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 23:19:31.443594  639302 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-557095 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:19:32.295435  639302 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 23:19:32.295569  639302 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-557095 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 23:19:32.689720  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.257438158s)
	I1013 23:19:35.717006  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.211917464s)
	I1013 23:19:35.717101  639746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.17674318s)
	I1013 23:19:35.717157  639746 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:19:35.726045  639746 node_ready.go:49] node "default-k8s-diff-port-033746" is "Ready"
	I1013 23:19:35.726086  639746 node_ready.go:38] duration metric: took 8.908852ms for node "default-k8s-diff-port-033746" to be "Ready" ...
	I1013 23:19:35.726124  639746 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:19:35.726201  639746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:19:35.841511  639746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.637218136s)
	I1013 23:19:35.841773  639746 api_server.go:72] duration metric: took 9.802781824s to wait for apiserver process to appear ...
	I1013 23:19:35.841791  639746 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:19:35.841818  639746 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1013 23:19:35.844816  639746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-033746 addons enable metrics-server
	
	I1013 23:19:35.847705  639746 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1013 23:19:33.276486  639302 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 23:19:33.557718  639302 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 23:19:34.521548  639302 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 23:19:34.522022  639302 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 23:19:35.081508  639302 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 23:19:35.214680  639302 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 23:19:35.979439  639302 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 23:19:36.258870  639302 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 23:19:36.572789  639302 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 23:19:36.573948  639302 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 23:19:36.577028  639302 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 23:19:35.850563  639746 addons.go:514] duration metric: took 9.811293999s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1013 23:19:35.856358  639746 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1013 23:19:35.860226  639746 api_server.go:141] control plane version: v1.34.1
	I1013 23:19:35.860258  639746 api_server.go:131] duration metric: took 18.45958ms to wait for apiserver health ...
	I1013 23:19:35.860268  639746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:19:35.867708  639746 system_pods.go:59] 8 kube-system pods found
	I1013 23:19:35.867751  639746 system_pods.go:61] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:19:35.867761  639746 system_pods.go:61] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:35.867768  639746 system_pods.go:61] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:19:35.867792  639746 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:35.867802  639746 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:35.867812  639746 system_pods.go:61] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:19:35.867822  639746 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:35.867833  639746 system_pods.go:61] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:19:35.867840  639746 system_pods.go:74] duration metric: took 7.561516ms to wait for pod list to return data ...
	I1013 23:19:35.867856  639746 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:19:35.876839  639746 default_sa.go:45] found service account: "default"
	I1013 23:19:35.876869  639746 default_sa.go:55] duration metric: took 9.003875ms for default service account to be created ...
	I1013 23:19:35.876889  639746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:19:35.894506  639746 system_pods.go:86] 8 kube-system pods found
	I1013 23:19:35.894554  639746 system_pods.go:89] "coredns-66bc5c9577-qf4lq" [a75d4ff9-259b-4a0c-9c05-ce8343096549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:19:35.894565  639746 system_pods.go:89] "etcd-default-k8s-diff-port-033746" [17279d69-e124-4cdc-9eba-e3bc453ddc89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 23:19:35.894571  639746 system_pods.go:89] "kindnet-vgn6v" [6a27f223-9eda-4489-a432-bd17dffee02c] Running
	I1013 23:19:35.894578  639746 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-033746" [4ab7e979-51a8-4f22-9cd0-15bcd011b463] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 23:19:35.894585  639746 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-033746" [d637e44b-740d-4ae7-9410-7226e3404945] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 23:19:35.894592  639746 system_pods.go:89] "kube-proxy-mxnv7" [ec497b3c-7371-4a5d-a3ac-be5240db89ca] Running
	I1013 23:19:35.894603  639746 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-033746" [bcb906cc-7b26-4db4-9f2b-8adc8400906c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 23:19:35.894620  639746 system_pods.go:89] "storage-provisioner" [bba169b1-b8a2-40d0-aa47-6ee1369a7107] Running
	I1013 23:19:35.894628  639746 system_pods.go:126] duration metric: took 17.731594ms to wait for k8s-apps to be running ...
	I1013 23:19:35.894640  639746 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:19:35.894702  639746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:19:35.927066  639746 system_svc.go:56] duration metric: took 32.4157ms WaitForService to wait for kubelet
	I1013 23:19:35.927122  639746 kubeadm.go:586] duration metric: took 9.888131087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:19:35.927142  639746 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:19:35.934390  639746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:19:35.934437  639746 node_conditions.go:123] node cpu capacity is 2
	I1013 23:19:35.934451  639746 node_conditions.go:105] duration metric: took 7.302648ms to run NodePressure ...
	I1013 23:19:35.934468  639746 start.go:241] waiting for startup goroutines ...
	I1013 23:19:35.934481  639746 start.go:246] waiting for cluster config update ...
	I1013 23:19:35.934493  639746 start.go:255] writing updated cluster config ...
	I1013 23:19:35.934848  639746 ssh_runner.go:195] Run: rm -f paused
	I1013 23:19:35.940121  639746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:19:35.950632  639746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:19:36.581769  639302 out.go:252]   - Booting up control plane ...
	I1013 23:19:36.581901  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 23:19:36.582358  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 23:19:36.585491  639302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 23:19:36.609331  639302 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 23:19:36.609450  639302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 23:19:36.618813  639302 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 23:19:36.618917  639302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 23:19:36.618959  639302 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 23:19:36.839360  639302 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 23:19:36.839485  639302 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1013 23:19:37.956636  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:39.959553  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:37.843602  639302 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001077633s
	I1013 23:19:37.843805  639302 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 23:19:37.844094  639302 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 23:19:37.844213  639302 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 23:19:37.844800  639302 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1013 23:19:41.966901  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:44.457554  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:43.148826  639302 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.303819182s
	I1013 23:19:45.517018  639302 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.671784319s
	I1013 23:19:47.348652  639302 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.50211531s
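Note: the control-plane-check phase above polls each component's local health endpoint (livez/healthz) until it responds. A minimal Go sketch of that polling pattern (hypothetical helper; the interval and TLS handling are illustrative, not kubeadm's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Component serving certs are not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}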
	I1013 23:19:47.373372  639302 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 23:19:47.392751  639302 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 23:19:47.414075  639302 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 23:19:47.414570  639302 kubeadm.go:318] [mark-control-plane] Marking the node auto-557095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 23:19:47.430845  639302 kubeadm.go:318] [bootstrap-token] Using token: fl9uev.ja78svxq4m6apyxu
	I1013 23:19:47.433796  639302 out.go:252]   - Configuring RBAC rules ...
	I1013 23:19:47.433921  639302 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 23:19:47.441880  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 23:19:47.456252  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 23:19:47.462686  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 23:19:47.469093  639302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 23:19:47.476679  639302 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 23:19:47.753880  639302 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 23:19:48.206625  639302 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 23:19:48.754130  639302 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 23:19:48.755354  639302 kubeadm.go:318] 
	I1013 23:19:48.755438  639302 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 23:19:48.755451  639302 kubeadm.go:318] 
	I1013 23:19:48.755532  639302 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 23:19:48.755540  639302 kubeadm.go:318] 
	I1013 23:19:48.755573  639302 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 23:19:48.755639  639302 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 23:19:48.755701  639302 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 23:19:48.755709  639302 kubeadm.go:318] 
	I1013 23:19:48.755768  639302 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 23:19:48.755776  639302 kubeadm.go:318] 
	I1013 23:19:48.755826  639302 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 23:19:48.755835  639302 kubeadm.go:318] 
	I1013 23:19:48.755889  639302 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 23:19:48.755970  639302 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 23:19:48.756044  639302 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 23:19:48.756051  639302 kubeadm.go:318] 
	I1013 23:19:48.756138  639302 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 23:19:48.756221  639302 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 23:19:48.756229  639302 kubeadm.go:318] 
	I1013 23:19:48.756316  639302 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fl9uev.ja78svxq4m6apyxu \
	I1013 23:19:48.756433  639302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 \
	I1013 23:19:48.756459  639302 kubeadm.go:318] 	--control-plane 
	I1013 23:19:48.756466  639302 kubeadm.go:318] 
	I1013 23:19:48.756555  639302 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 23:19:48.756565  639302 kubeadm.go:318] 
	I1013 23:19:48.756650  639302 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fl9uev.ja78svxq4m6apyxu \
	I1013 23:19:48.756758  639302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:532ea8590bee4c301ef285f8e3492b8928a8eff65fba85967ed42e7c1c145ff6 
	I1013 23:19:48.761191  639302 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 23:19:48.761457  639302 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 23:19:48.761586  639302 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 23:19:48.761625  639302 cni.go:84] Creating CNI manager for ""
	I1013 23:19:48.761633  639302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 23:19:48.766751  639302 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 23:19:46.957000  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:49.456819  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:48.769762  639302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 23:19:48.773854  639302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 23:19:48.773879  639302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 23:19:48.787688  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 23:19:49.111867  639302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 23:19:49.112033  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:49.112127  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-557095 minikube.k8s.io/updated_at=2025_10_13T23_19_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=auto-557095 minikube.k8s.io/primary=true
	I1013 23:19:49.276873  639302 ops.go:34] apiserver oom_adj: -16
	I1013 23:19:49.276983  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:49.777567  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:50.277263  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:50.778011  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:51.277042  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:51.778037  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:52.278092  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:52.777549  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:53.277099  639302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 23:19:53.435922  639302 kubeadm.go:1113] duration metric: took 4.323959141s to wait for elevateKubeSystemPrivileges
	I1013 23:19:53.435948  639302 kubeadm.go:402] duration metric: took 25.25845552s to StartCluster
	I1013 23:19:53.435965  639302 settings.go:142] acquiring lock: {Name:mk0afd9ff19edc9483d3606a8772ba9c7fa8543c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:53.436040  639302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:19:53.437028  639302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-428797/kubeconfig: {Name:mk4f12bc59ceabc6f249f1564ea8174aee5ee9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 23:19:53.437244  639302 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 23:19:53.437389  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 23:19:53.437638  639302 config.go:182] Loaded profile config "auto-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:19:53.437673  639302 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 23:19:53.437733  639302 addons.go:69] Setting storage-provisioner=true in profile "auto-557095"
	I1013 23:19:53.437747  639302 addons.go:238] Setting addon storage-provisioner=true in "auto-557095"
	I1013 23:19:53.437774  639302 host.go:66] Checking if "auto-557095" exists ...
	I1013 23:19:53.438273  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.438840  639302 addons.go:69] Setting default-storageclass=true in profile "auto-557095"
	I1013 23:19:53.438869  639302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-557095"
	I1013 23:19:53.439194  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.442113  639302 out.go:179] * Verifying Kubernetes components...
	I1013 23:19:53.458852  639302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 23:19:53.474408  639302 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 23:19:53.479200  639302 addons.go:238] Setting addon default-storageclass=true in "auto-557095"
	I1013 23:19:53.479241  639302 host.go:66] Checking if "auto-557095" exists ...
	I1013 23:19:53.479760  639302 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:53.479777  639302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 23:19:53.479830  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:53.480003  639302 cli_runner.go:164] Run: docker container inspect auto-557095 --format={{.State.Status}}
	I1013 23:19:53.527277  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:53.532607  639302 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:53.532631  639302 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 23:19:53.532698  639302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-557095
	I1013 23:19:53.558384  639302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/auto-557095/id_rsa Username:docker}
	I1013 23:19:53.804271  639302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 23:19:53.804632  639302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 23:19:53.807523  639302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 23:19:53.826462  639302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 23:19:53.857043  639302 node_ready.go:35] waiting up to 15m0s for node "auto-557095" to be "Ready" ...
	I1013 23:19:54.320471  639302 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 23:19:54.678706  639302 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1013 23:19:51.956615  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:53.956715  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:56.456816  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	I1013 23:19:54.681626  639302 addons.go:514] duration metric: took 1.243933963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 23:19:54.825381  639302 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-557095" context rescaled to 1 replicas
	W1013 23:19:55.860151  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:19:58.956427  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:01.457274  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:19:57.860346  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:00.381021  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:03.956022  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:05.956499  639746 pod_ready.go:104] pod "coredns-66bc5c9577-qf4lq" is not "Ready", error: <nil>
	W1013 23:20:02.860033  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:04.860402  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:07.360147  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	I1013 23:20:07.456553  639746 pod_ready.go:94] pod "coredns-66bc5c9577-qf4lq" is "Ready"
	I1013 23:20:07.456584  639746 pod_ready.go:86] duration metric: took 31.50591381s for pod "coredns-66bc5c9577-qf4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.459699  639746 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.486147  639746 pod_ready.go:94] pod "etcd-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.486176  639746 pod_ready.go:86] duration metric: took 26.447156ms for pod "etcd-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.489232  639746 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.494475  639746 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.494552  639746 pod_ready.go:86] duration metric: took 5.288904ms for pod "kube-apiserver-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.497504  639746 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.654481  639746 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:07.654512  639746 pod_ready.go:86] duration metric: took 156.970722ms for pod "kube-controller-manager-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:07.854785  639746 pod_ready.go:83] waiting for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.254510  639746 pod_ready.go:94] pod "kube-proxy-mxnv7" is "Ready"
	I1013 23:20:08.254546  639746 pod_ready.go:86] duration metric: took 399.723751ms for pod "kube-proxy-mxnv7" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.454714  639746 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.854321  639746 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-033746" is "Ready"
	I1013 23:20:08.854352  639746 pod_ready.go:86] duration metric: took 399.612729ms for pod "kube-scheduler-default-k8s-diff-port-033746" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:08.854366  639746 pod_ready.go:40] duration metric: took 32.914195813s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:08.923018  639746 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:20:08.928247  639746 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-033746" cluster and "default" namespace by default
	W1013 23:20:09.366240  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:11.859852  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:13.861094  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:16.359959  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:18.360293  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:20.360717  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:22.860743  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:24.862094  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:27.360748  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:29.860555  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:31.860715  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	W1013 23:20:33.860913  639302 node_ready.go:57] node "auto-557095" has "Ready":"False" status (will retry)
	I1013 23:20:35.359868  639302 node_ready.go:49] node "auto-557095" is "Ready"
	I1013 23:20:35.359901  639302 node_ready.go:38] duration metric: took 41.502778675s for node "auto-557095" to be "Ready" ...
	I1013 23:20:35.359915  639302 api_server.go:52] waiting for apiserver process to appear ...
	I1013 23:20:35.359986  639302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 23:20:35.373372  639302 api_server.go:72] duration metric: took 41.936099931s to wait for apiserver process to appear ...
	I1013 23:20:35.373395  639302 api_server.go:88] waiting for apiserver healthz status ...
	I1013 23:20:35.373415  639302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 23:20:35.383278  639302 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 23:20:35.385592  639302 api_server.go:141] control plane version: v1.34.1
	I1013 23:20:35.385633  639302 api_server.go:131] duration metric: took 12.230297ms to wait for apiserver health ...
	I1013 23:20:35.385642  639302 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 23:20:35.390140  639302 system_pods.go:59] 8 kube-system pods found
	I1013 23:20:35.390256  639302 system_pods.go:61] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.390300  639302 system_pods.go:61] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.390333  639302 system_pods.go:61] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.390352  639302 system_pods.go:61] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.390391  639302 system_pods.go:61] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.390429  639302 system_pods.go:61] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.390448  639302 system_pods.go:61] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.390476  639302 system_pods.go:61] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.390530  639302 system_pods.go:74] duration metric: took 4.862411ms to wait for pod list to return data ...
	I1013 23:20:35.390573  639302 default_sa.go:34] waiting for default service account to be created ...
	I1013 23:20:35.410226  639302 default_sa.go:45] found service account: "default"
	I1013 23:20:35.410265  639302 default_sa.go:55] duration metric: took 19.662888ms for default service account to be created ...
	I1013 23:20:35.410276  639302 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 23:20:35.422371  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.422415  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.422426  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.422433  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.422438  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.422442  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.422447  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.422451  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.422459  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.422489  639302 retry.go:31] will retry after 203.233916ms: missing components: kube-dns
	I1013 23:20:35.634644  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.634683  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.634691  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.634698  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.634702  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.634706  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.634720  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.634724  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.634735  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.634756  639302 retry.go:31] will retry after 357.661569ms: missing components: kube-dns
	I1013 23:20:35.996643  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:35.996683  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:35.996691  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:35.996698  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:35.996703  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:35.996707  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:35.996712  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:35.996716  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:35.996722  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:35.996743  639302 retry.go:31] will retry after 305.740238ms: missing components: kube-dns
	I1013 23:20:36.309467  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:36.309511  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 23:20:36.309519  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:36.309526  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:36.309530  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:36.309534  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:36.309539  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:36.309543  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:36.309548  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 23:20:36.309564  639302 retry.go:31] will retry after 454.04081ms: missing components: kube-dns
	I1013 23:20:36.767652  639302 system_pods.go:86] 8 kube-system pods found
	I1013 23:20:36.767689  639302 system_pods.go:89] "coredns-66bc5c9577-74t9m" [cfcff25d-d6e6-43b2-9a0b-284f406a1bb7] Running
	I1013 23:20:36.767696  639302 system_pods.go:89] "etcd-auto-557095" [73418d42-c6d5-43a4-960d-83a0c53a557f] Running
	I1013 23:20:36.767700  639302 system_pods.go:89] "kindnet-976nw" [b5fcba75-3119-4021-b9b3-5c2848742391] Running
	I1013 23:20:36.767705  639302 system_pods.go:89] "kube-apiserver-auto-557095" [a7c1671e-9c4c-481f-a9ba-69d3ba10a7ab] Running
	I1013 23:20:36.767709  639302 system_pods.go:89] "kube-controller-manager-auto-557095" [df6cf89e-718a-4975-bc8a-d7e11f396d5a] Running
	I1013 23:20:36.767747  639302 system_pods.go:89] "kube-proxy-2hnwf" [0db3252d-ce63-4f95-9413-ea46d293b883] Running
	I1013 23:20:36.767759  639302 system_pods.go:89] "kube-scheduler-auto-557095" [4ec12b3d-a634-4238-bcd1-7afa0cffb115] Running
	I1013 23:20:36.767765  639302 system_pods.go:89] "storage-provisioner" [ca6a46fa-422f-48ae-91dd-09d07f7fa3fd] Running
	I1013 23:20:36.767774  639302 system_pods.go:126] duration metric: took 1.357491783s to wait for k8s-apps to be running ...
	I1013 23:20:36.767786  639302 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 23:20:36.767854  639302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 23:20:36.781759  639302 system_svc.go:56] duration metric: took 13.961881ms WaitForService to wait for kubelet
	I1013 23:20:36.781786  639302 kubeadm.go:586] duration metric: took 43.344520207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 23:20:36.781814  639302 node_conditions.go:102] verifying NodePressure condition ...
	I1013 23:20:36.785460  639302 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 23:20:36.785497  639302 node_conditions.go:123] node cpu capacity is 2
	I1013 23:20:36.785513  639302 node_conditions.go:105] duration metric: took 3.694007ms to run NodePressure ...
	I1013 23:20:36.785526  639302 start.go:241] waiting for startup goroutines ...
	I1013 23:20:36.785536  639302 start.go:246] waiting for cluster config update ...
	I1013 23:20:36.785548  639302 start.go:255] writing updated cluster config ...
	I1013 23:20:36.785882  639302 ssh_runner.go:195] Run: rm -f paused
	I1013 23:20:36.789650  639302 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:36.793393  639302 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.798604  639302 pod_ready.go:94] pod "coredns-66bc5c9577-74t9m" is "Ready"
	I1013 23:20:36.798633  639302 pod_ready.go:86] duration metric: took 5.214921ms for pod "coredns-66bc5c9577-74t9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.801312  639302 pod_ready.go:83] waiting for pod "etcd-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.806548  639302 pod_ready.go:94] pod "etcd-auto-557095" is "Ready"
	I1013 23:20:36.806578  639302 pod_ready.go:86] duration metric: took 5.241521ms for pod "etcd-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.809227  639302 pod_ready.go:83] waiting for pod "kube-apiserver-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.814381  639302 pod_ready.go:94] pod "kube-apiserver-auto-557095" is "Ready"
	I1013 23:20:36.814414  639302 pod_ready.go:86] duration metric: took 5.162417ms for pod "kube-apiserver-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:36.817085  639302 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.194222  639302 pod_ready.go:94] pod "kube-controller-manager-auto-557095" is "Ready"
	I1013 23:20:37.194252  639302 pod_ready.go:86] duration metric: took 377.138693ms for pod "kube-controller-manager-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.397257  639302 pod_ready.go:83] waiting for pod "kube-proxy-2hnwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.794260  639302 pod_ready.go:94] pod "kube-proxy-2hnwf" is "Ready"
	I1013 23:20:37.794338  639302 pod_ready.go:86] duration metric: took 397.044859ms for pod "kube-proxy-2hnwf" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:37.993774  639302 pod_ready.go:83] waiting for pod "kube-scheduler-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:38.396070  639302 pod_ready.go:94] pod "kube-scheduler-auto-557095" is "Ready"
	I1013 23:20:38.396146  639302 pod_ready.go:86] duration metric: took 402.33858ms for pod "kube-scheduler-auto-557095" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 23:20:38.396171  639302 pod_ready.go:40] duration metric: took 1.606487824s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 23:20:38.451253  639302 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 23:20:38.454336  639302 out.go:179] * Done! kubectl is now configured to use "auto-557095" cluster and "default" namespace by default
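
	Note that the stream above interleaves two concurrent runs (PIDs 639302 for "auto-557095" and 639746 for "default-k8s-diff-port-033746"), which is why the timestamps appear to jump backwards. Both runs gate on pod readiness via label selectors before printing "Done!". A rough manual equivalent of that readiness check (a sketch only; the selectors come from the log, the context name assumes minikube's default profile-named context, and the 240s timeout is illustrative):
	
	  kubectl --context auto-557095 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	  kubectl --context auto-557095 get nodes   # node should report Ready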
	
	
	==> CRI-O <==
	Oct 13 23:20:14 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:14.088351243Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 23:20:32 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:32.856279343Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=307e3bb8-2169-412c-a93a-bdc84bfcf990 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.058530313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bf279553-638e-44a0-9539-7d53f323d396 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.105996007Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=7f712469-04ca-4b3d-ae71-1cd9a56db2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.106289039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.115762878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.116608326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.147824651Z" level=info msg="Created container 56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=7f712469-04ca-4b3d-ae71-1cd9a56db2ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.149009777Z" level=info msg="Starting container: 56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b" id=b18c7846-4c96-4d1a-87dc-7a0fa81d54ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.150905329Z" level=info msg="Started container" PID=1709 containerID=56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper id=b18c7846-4c96-4d1a-87dc-7a0fa81d54ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=9cc108d4284eb40a5d50319faed914fc495d8e759cb4e474538acaf5a3ec28be
	Oct 13 23:20:33 default-k8s-diff-port-033746 conmon[1706]: conmon 56457140a6afa533157c <ninfo>: container 1709 exited with status 1
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.220040059Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=e66ed8c1-80fd-4bf0-8e63-e1b81a70485f name=/runtime.v1.ImageService/PullImage
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.220737595Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=23443f62-f96f-4b47-83ed-3e6500be003a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.223508453Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a657fccc-c56e-45bc-8c1d-7035e97bd082 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.232275648Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard" id=d013c055-41eb-4913-ad45-148933bb446b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.233142864Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.238266717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.238629065Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2845502e3c5154a9b6d23541058b2b90529d18ee9f638f710165f2a4722edda/merged/etc/group: no such file or directory"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.239239186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.267455932Z" level=info msg="Created container 224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard" id=d013c055-41eb-4913-ad45-148933bb446b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.271900939Z" level=info msg="Starting container: 224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a" id=c88dea41-bf43-4762-8597-4b15b73a1c1b name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.272962711Z" level=info msg="Removing container: 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.284901798Z" level=info msg="Started container" PID=1719 containerID=224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m/kubernetes-dashboard id=c88dea41-bf43-4762-8597-4b15b73a1c1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=67ef5446bd7f6d397cdc0e57f60668334b65e09d18f635586e7c008d1c284d6e
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.29323495Z" level=info msg="Error loading conmon cgroup of container 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135: cgroup deleted" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 23:20:33 default-k8s-diff-port-033746 crio[652]: time="2025-10-13T23:20:33.299524532Z" level=info msg="Removed container 6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv/dashboard-metrics-scraper" id=e035693e-3b8d-45a7-bfd5-08529d76e28a name=/runtime.v1.RuntimeService/RemoveContainer
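
	The CRI-O entries above show the dashboard containers being created and started through the CRI RuntimeService, including one container that exits with status 1. The same lifecycle can be inspected directly on the node with crictl (a sketch; the name filter and container ID prefix are copied from the log):
	
	  sudo crictl ps -a --name dashboard-metrics-scraper   # -a includes Exited attempts
	  sudo crictl logs 56457140a6afa                       # output of the exited scraper container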
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	224275fc9d0a4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   18 seconds ago       Running             kubernetes-dashboard        0                   67ef5446bd7f6       kubernetes-dashboard-855c9754f9-gck5m                  kubernetes-dashboard
	56457140a6afa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   3                   9cc108d4284eb       dashboard-metrics-scraper-6ffb444bf9-gmddv             kubernetes-dashboard
	5c555d4efff48       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           45 seconds ago       Running             storage-provisioner         2                   e155be53020ed       storage-provisioner                                    kube-system
	07ae824f8dd13       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   5bc545837bce4       coredns-66bc5c9577-qf4lq                               kube-system
	2744854e183c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   e155be53020ed       storage-provisioner                                    kube-system
	00a0781d38ad3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   a647b1fc2f70e       busybox                                                default
	f817315f7da05       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   d59e813082648       kube-proxy-mxnv7                                       kube-system
	627054f4b8711       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   ae8465e3d2c8b       kindnet-vgn6v                                          kube-system
	066ad3d69ea84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   00b8ffcd9e676       etcd-default-k8s-diff-port-033746                      kube-system
	cab38f78f2c2f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   be98ef070aa07       kube-controller-manager-default-k8s-diff-port-033746   kube-system
	3f7f4bc1a19c7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   07fe84a0db6b9       kube-apiserver-default-k8s-diff-port-033746            kube-system
	4e7274aa96669       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0e72edea0e13a       kube-scheduler-default-k8s-diff-port-033746            kube-system
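
	The dashboard-metrics-scraper container in the table is on attempt 3 and Exited (the conmon line earlier shows exit status 1), i.e. it is crash-looping while everything else runs. One way to pull its restart count and last exit code from the API instead of the node (a sketch; pod name copied from the table above):
	
	  kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-gmddv \
	    -o jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].lastState.terminated.exitCode}'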
	
	
	==> coredns [07ae824f8dd13988631a49a5321f83059aa5d43e097358a27639066d210ec4c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38718 - 11337 "HINFO IN 7413653779595176445.904703530762834880. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.026886932s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
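
	The i/o timeouts to 10.96.0.1:443 above mean CoreDNS could not reach the kube-apiserver Service while the node was restarting; they stop once pod networking settles. A standard way to re-test in-cluster DNS afterwards, per the upstream DNS-debugging guidance (a sketch; pod name and image tag are illustrative):
	
	  kubectl run -it --rm dnsprobe --image=busybox:1.36 --restart=Never \
	    -- nslookup kubernetes.default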
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-033746
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-033746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=default-k8s-diff-port-033746
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T23_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 23:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-033746
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 23:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 23:20:23 +0000   Mon, 13 Oct 2025 23:18:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-033746
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc522ae4bedd3269a6b4b80d68ed054e
	  System UUID:                b334b9dc-cabb-43d9-9bf2-cf916bb499bf
	  Boot ID:                    dd7cc516-027d-429f-8a1d-9042f0d8afad
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 coredns-66bc5c9577-qf4lq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m47s
	  kube-system                 etcd-default-k8s-diff-port-033746                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m53s
	  kube-system                 kindnet-vgn6v                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m48s
	  kube-system                 kube-apiserver-default-k8s-diff-port-033746             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-033746    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-mxnv7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-scheduler-default-k8s-diff-port-033746             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gmddv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gck5m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m45s                kube-proxy       
	  Normal   Starting                 74s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m4s (x8 over 3m4s)  kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m53s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m53s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m53s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m53s                kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m53s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m48s                node-controller  Node default-k8s-diff-port-033746 event: Registered Node default-k8s-diff-port-033746 in Controller
	  Normal   NodeReady                2m6s                 kubelet          Node default-k8s-diff-port-033746 status is now: NodeReady
	  Normal   Starting                 88s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)    kubelet          Node default-k8s-diff-port-033746 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                  node-controller  Node default-k8s-diff-port-033746 event: Registered Node default-k8s-diff-port-033746 in Controller
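
	Per the "Allocated resources" block above, control-plane and add-on pods alone request 850m CPU (42%) of this 2-CPU node. The same figures can be pulled without the full describe output (a sketch; node name copied from the report):
	
	  kubectl get node default-k8s-diff-port-033746 -o jsonpath='{.status.allocatable.cpu}'
	  kubectl describe node default-k8s-diff-port-033746 | grep -A7 'Allocated resources'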
	
	
	==> dmesg <==
	[Oct13 22:57] overlayfs: idmapped layers are currently not supported
	[ +25.225942] overlayfs: idmapped layers are currently not supported
	[Oct13 22:58] overlayfs: idmapped layers are currently not supported
	[Oct13 22:59] overlayfs: idmapped layers are currently not supported
	[Oct13 23:00] overlayfs: idmapped layers are currently not supported
	[Oct13 23:01] overlayfs: idmapped layers are currently not supported
	[Oct13 23:03] overlayfs: idmapped layers are currently not supported
	[Oct13 23:05] overlayfs: idmapped layers are currently not supported
	[ +31.793671] overlayfs: idmapped layers are currently not supported
	[Oct13 23:07] overlayfs: idmapped layers are currently not supported
	[Oct13 23:09] overlayfs: idmapped layers are currently not supported
	[Oct13 23:10] overlayfs: idmapped layers are currently not supported
	[Oct13 23:11] overlayfs: idmapped layers are currently not supported
	[  +0.256041] overlayfs: idmapped layers are currently not supported
	[ +43.086148] overlayfs: idmapped layers are currently not supported
	[Oct13 23:13] overlayfs: idmapped layers are currently not supported
	[Oct13 23:14] overlayfs: idmapped layers are currently not supported
	[Oct13 23:15] overlayfs: idmapped layers are currently not supported
	[Oct13 23:16] overlayfs: idmapped layers are currently not supported
	[ +36.293322] overlayfs: idmapped layers are currently not supported
	[Oct13 23:17] overlayfs: idmapped layers are currently not supported
	[Oct13 23:18] overlayfs: idmapped layers are currently not supported
	[ +26.588739] overlayfs: idmapped layers are currently not supported
	[Oct13 23:19] overlayfs: idmapped layers are currently not supported
	[ +12.709304] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [066ad3d69ea84808c078d93b1f6265cfd21518d17a5db054d1b69f87ca56e952] <==
	{"level":"warn","ts":"2025-10-13T23:19:29.605995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.635281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.674216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.730976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.749802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.811325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.831704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.944047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.946723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:29.998885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.093797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.167248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.209891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.275466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.315866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.379754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.380580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.417377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.439843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.471444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.500147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.532500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.571171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.596339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T23:19:30.695156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:20:52 up  3:03,  0 user,  load average: 2.79, 3.60, 2.97
	Linux default-k8s-diff-port-033746 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [627054f4b8711bf5c68f79b3ba67430e516c8873d1bc2dac09c6d20b34208388] <==
	I1013 23:19:33.703483       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 23:19:33.703666       1 main.go:148] setting mtu 1500 for CNI 
	I1013 23:19:33.703680       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 23:19:33.703692       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T23:19:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 23:19:34.076721       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 23:19:34.076751       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 23:19:34.076761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 23:19:34.077090       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 23:20:04.068893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 23:20:04.077509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 23:20:04.077627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 23:20:04.077715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 23:20:05.577662       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 23:20:05.577690       1 metrics.go:72] Registering metrics
	I1013 23:20:05.577756       1 controller.go:711] "Syncing nftables rules"
	I1013 23:20:14.072495       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:14.072616       1 main.go:301] handling current node
	I1013 23:20:24.067764       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:24.067796       1 main.go:301] handling current node
	I1013 23:20:34.069167       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:34.069201       1 main.go:301] handling current node
	I1013 23:20:44.072518       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 23:20:44.072555       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3f7f4bc1a19c7b8ca9e580a8effb1d745cb76de4a5ab7542321977f3bf56b636] <==
	I1013 23:19:32.092331       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 23:19:32.115945       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 23:19:32.235431       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 23:19:32.244563       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 23:19:32.244593       1 policy_source.go:240] refreshing policies
	I1013 23:19:32.287703       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 23:19:32.289016       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 23:19:32.289370       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 23:19:32.290193       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 23:19:32.290207       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 23:19:32.305559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 23:19:32.375236       1 cache.go:39] Caches are synced for autoregister controller
	I1013 23:19:32.379989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 23:19:32.463459       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 23:19:32.860122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 23:19:32.917696       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 23:19:34.983689       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 23:19:35.250299       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 23:19:35.395975       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 23:19:35.453902       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 23:19:35.802221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.18.251"}
	I1013 23:19:35.834893       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.34.19"}
	I1013 23:19:37.020362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 23:19:37.122033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 23:19:37.167796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cab38f78f2c2f085857d4a3efa0373a4a503447eebfd8334b6524ca0ec415a07] <==
	I1013 23:19:36.799137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 23:19:36.799264       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 23:19:36.805300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 23:19:36.806519       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 23:19:36.811139       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 23:19:36.815633       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 23:19:36.815722       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 23:19:36.819151       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 23:19:36.823205       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 23:19:36.823311       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 23:19:36.824034       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 23:19:36.824100       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 23:19:36.831176       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 23:19:36.831452       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 23:19:36.839205       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 23:19:36.839396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 23:19:36.839531       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 23:19:36.839573       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 23:19:36.842529       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 23:19:36.854088       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 23:19:36.854215       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 23:19:36.960098       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:36.981919       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 23:19:36.982022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 23:19:36.982054       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f817315f7da05cc291c73bfaf16bad680cb70bb5ff043f18fa59f7ada7fb3215] <==
	I1013 23:19:36.970438       1 server_linux.go:53] "Using iptables proxy"
	I1013 23:19:37.516947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 23:19:37.617696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 23:19:37.617820       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 23:19:37.617924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 23:19:37.655292       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 23:19:37.655363       1 server_linux.go:132] "Using iptables Proxier"
	I1013 23:19:37.659663       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 23:19:37.660015       1 server.go:527] "Version info" version="v1.34.1"
	I1013 23:19:37.660084       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:37.661409       1 config.go:200] "Starting service config controller"
	I1013 23:19:37.661502       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 23:19:37.661560       1 config.go:106] "Starting endpoint slice config controller"
	I1013 23:19:37.661612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 23:19:37.661653       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 23:19:37.661682       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 23:19:37.662341       1 config.go:309] "Starting node config controller"
	I1013 23:19:37.662402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 23:19:37.662434       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 23:19:37.761769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 23:19:37.761811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 23:19:37.761854       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4e7274aa9666913e174875ca758f5279a206c60e735c947c6cd3cf7e67e99d2b] <==
	I1013 23:19:31.864259       1 serving.go:386] Generated self-signed cert in-memory
	I1013 23:19:37.068889       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 23:19:37.068995       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 23:19:37.074809       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 23:19:37.075265       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 23:19:37.075343       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 23:19:37.075425       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 23:19:37.084794       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:37.089468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 23:19:37.089582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.089623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.178046       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 23:19:37.191210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 23:19:37.191351       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 23:19:46 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:46.113632     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:46 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:46.113784     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:47 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:47.117611     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:47 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:47.117775     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:48 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:48.125260     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:48 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:48.125463     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:19:58 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:58.855189     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:59.158425     784 scope.go:117] "RemoveContainer" containerID="79a0ab0cea63aaed6ca2f9e3a2307fbba0b9100905df937fd05be620b3415db1"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: I1013 23:19:59.158641     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:19:59 default-k8s-diff-port-033746 kubelet[784]: E1013 23:19:59.158813     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:06 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:06.183907     784 scope.go:117] "RemoveContainer" containerID="2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394"
	Oct 13 23:20:07 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:07.708958     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:07 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:07.709125     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:18 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:18.855522     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:18 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:18.855781     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:32 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:32.855354     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:33.256137     784 scope.go:117] "RemoveContainer" containerID="6058fd4d3c9a32684ca0ff52bf389545acc9e08b680cb7b533054ddf1edfa135"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:33.256471     784 scope.go:117] "RemoveContainer" containerID="56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	Oct 13 23:20:33 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:33.256623     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:37.708817     784 scope.go:117] "RemoveContainer" containerID="56457140a6afa533157c919d2ad68f51c188ee4238c312cd7ae98e8529eca08b"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: E1013 23:20:37.709511     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gmddv_kubernetes-dashboard(275af32b-5420-49dc-8e5b-d1ee507da97e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gmddv" podUID="275af32b-5420-49dc-8e5b-d1ee507da97e"
	Oct 13 23:20:37 default-k8s-diff-port-033746 kubelet[784]: I1013 23:20:37.724999     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gck5m" podStartSLOduration=5.2720836030000005 podStartE2EDuration="1m0.7249799s" podCreationTimestamp="2025-10-13 23:19:37 +0000 UTC" firstStartedPulling="2025-10-13 23:19:37.769419893 +0000 UTC m=+13.185110335" lastFinishedPulling="2025-10-13 23:20:33.22231619 +0000 UTC m=+68.638006632" observedRunningTime="2025-10-13 23:20:34.274546172 +0000 UTC m=+69.690236614" watchObservedRunningTime="2025-10-13 23:20:37.7249799 +0000 UTC m=+73.140670350"
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 23:20:46 default-k8s-diff-port-033746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [224275fc9d0a488bfafec1602fc9249090a6c390eb0c7e47ca01094727aa8a0a] <==
	2025/10/13 23:20:33 Using namespace: kubernetes-dashboard
	2025/10/13 23:20:33 Using in-cluster config to connect to apiserver
	2025/10/13 23:20:33 Using secret token for csrf signing
	2025/10/13 23:20:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 23:20:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 23:20:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 23:20:33 Generating JWE encryption key
	2025/10/13 23:20:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 23:20:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 23:20:34 Initializing JWE encryption key from synchronized object
	2025/10/13 23:20:34 Creating in-cluster Sidecar client
	2025/10/13 23:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 23:20:34 Serving insecurely on HTTP port: 9090
	2025/10/13 23:20:33 Starting overwatch
	
	
	==> storage-provisioner [2744854e183c6d04900672bd669f244f681e44096d61da7ce2a00ed165ae9394] <==
	I1013 23:19:35.423174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 23:20:05.967299       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5c555d4efff48ea336088981b3246ac8d7f5cb5d4c6d286df5c7bd6fba460d35] <==
	W1013 23:20:27.688949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:29.692267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:29.698246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:31.702775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:31.714902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:33.717770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:33.722796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:35.725738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:35.730495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:37.737815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:37.743341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:39.746400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:39.754150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:41.757436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:41.762687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:43.765546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:43.772504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:45.776264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:45.789712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:47.793058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:47.801110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:49.804462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:49.810094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:51.813360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 23:20:51.820005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746: exit status 2 (391.908188ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.24s)
E1013 23:26:33.593807  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
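The storage-provisioner log in the trace above warns on every poll that the core v1 Endpoints API is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice (the two-second cadence suggests a leader-election or sync loop, though the trace does not say so explicitly). A minimal client-go sketch of the replacement call the warning recommends; the default kubeconfig path and the kube-system namespace are illustrative assumptions, not taken from this run:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); an assumption for
		// this sketch, not the CI harness's configuration.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// List discovery.k8s.io/v1 EndpointSlices instead of the
		// deprecated core v1 Endpoints.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}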

                                                
                                    

Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.94
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 7.37
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 172.79
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.4
49 TestCertOptions 44.61
50 TestCertExpiration 248.94
52 TestForceSystemdFlag 38.42
53 TestForceSystemdEnv 39.19
59 TestErrorSpam/setup 33.84
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 6
63 TestErrorSpam/unpause 5.76
64 TestErrorSpam/stop 1.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 82.34
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.67
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
76 TestFunctional/serial/CacheCmd/cache/add_local 1.08
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 40
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.44
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.37
90 TestFunctional/parallel/ConfigCmd 0.52
91 TestFunctional/parallel/DashboardCmd 12.04
92 TestFunctional/parallel/DryRun 0.62
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.39
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 23.61
102 TestFunctional/parallel/SSHCmd 0.71
103 TestFunctional/parallel/CpCmd 2.46
105 TestFunctional/parallel/FileSync 0.4
106 TestFunctional/parallel/CertSync 2.22
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
114 TestFunctional/parallel/License 0.38
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 7.09
131 TestFunctional/parallel/MountCmd/specific-port 2.24
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
133 TestFunctional/parallel/ServiceCmd/List 0.95
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 1.55
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
145 TestFunctional/parallel/ImageCommands/Setup 0.66
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 205.35
164 TestMultiControlPlane/serial/DeployApp 7.7
165 TestMultiControlPlane/serial/PingHostFromPods 1.51
166 TestMultiControlPlane/serial/AddWorkerNode 61.67
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
169 TestMultiControlPlane/serial/CopyFile 20.37
170 TestMultiControlPlane/serial/StopSecondaryNode 12.89
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 26.42
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.97
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.88
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 35.99
178 TestMultiControlPlane/serial/RestartCluster 90.07
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
180 TestMultiControlPlane/serial/AddSecondaryNode 78.96
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
185 TestJSONOutput/start/Command 78.17
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 44.08
211 TestKicCustomNetwork/use_default_bridge_network 39.85
212 TestKicExistingNetwork 36.8
213 TestKicCustomSubnet 34.75
214 TestKicStaticIP 36.83
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 75.92
219 TestMountStart/serial/StartWithMountFirst 9.69
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.49
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.94
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 132.94
231 TestMultiNode/serial/DeployApp2Nodes 5.01
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 59.44
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.4
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 8.04
239 TestMultiNode/serial/RestartKeepsNodes 72.61
240 TestMultiNode/serial/DeleteNode 5.79
241 TestMultiNode/serial/StopMultiNode 24.06
242 TestMultiNode/serial/RestartMultiNode 52.66
243 TestMultiNode/serial/ValidateNameConflict 40.45
248 TestPreload 137.13
250 TestScheduledStopUnix 108.95
253 TestInsufficientStorage 11.38
254 TestRunningBinaryUpgrade 53.88
256 TestKubernetesUpgrade 364.92
257 TestMissingContainerUpgrade 116.64
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 47.76
261 TestNoKubernetes/serial/StartWithStopK8s 20.48
262 TestNoKubernetes/serial/Start 5.72
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
264 TestNoKubernetes/serial/ProfileList 0.71
265 TestNoKubernetes/serial/Stop 1.3
266 TestNoKubernetes/serial/StartNoArgs 6.95
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
268 TestStoppedBinaryUpgrade/Setup 0.78
269 TestStoppedBinaryUpgrade/Upgrade 60.78
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
279 TestPause/serial/Start 80.6
280 TestPause/serial/SecondStartNoReconfiguration 32.14
289 TestNetworkPlugins/group/false 5.7
294 TestStartStop/group/old-k8s-version/serial/FirstStart 69.97
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
297 TestStartStop/group/old-k8s-version/serial/Stop 12.15
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
299 TestStartStop/group/old-k8s-version/serial/SecondStart 52.11
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
305 TestStartStop/group/no-preload/serial/FirstStart 87.94
307 TestStartStop/group/embed-certs/serial/FirstStart 88.19
308 TestStartStop/group/no-preload/serial/DeployApp 8.33
310 TestStartStop/group/no-preload/serial/Stop 12.05
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 51.5
313 TestStartStop/group/embed-certs/serial/DeployApp 9.53
315 TestStartStop/group/embed-certs/serial/Stop 12.27
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
317 TestStartStop/group/embed-certs/serial/SecondStart 55.36
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.05
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
329 TestStartStop/group/newest-cni/serial/FirstStart 40.47
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.54
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
334 TestStartStop/group/newest-cni/serial/SecondStart 17.49
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.9
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
342 TestNetworkPlugins/group/auto/Start 85.84
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.48
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 31.01
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 11.28
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
351 TestNetworkPlugins/group/auto/DNS 0.2
352 TestNetworkPlugins/group/auto/Localhost 0.16
353 TestNetworkPlugins/group/auto/HairPin 0.22
354 TestNetworkPlugins/group/kindnet/Start 83.67
355 TestNetworkPlugins/group/calico/Start 62.43
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.35
359 TestNetworkPlugins/group/calico/NetCatPod 12.28
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
361 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
362 TestNetworkPlugins/group/calico/DNS 0.17
363 TestNetworkPlugins/group/calico/Localhost 0.15
364 TestNetworkPlugins/group/calico/HairPin 0.14
365 TestNetworkPlugins/group/kindnet/DNS 0.17
366 TestNetworkPlugins/group/kindnet/Localhost 0.14
367 TestNetworkPlugins/group/kindnet/HairPin 0.14
368 TestNetworkPlugins/group/custom-flannel/Start 68.2
369 TestNetworkPlugins/group/enable-default-cni/Start 91.52
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.31
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.41
377 TestNetworkPlugins/group/flannel/Start 67.18
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/bridge/Start 77.41
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
384 TestNetworkPlugins/group/flannel/NetCatPod 10.28
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.16
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.29
390 TestNetworkPlugins/group/bridge/DNS 0.23
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (9.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-503320 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-503320 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.939650306s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.94s)
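The json-events subtest exercises the one-JSON-object-per-line event stream that `minikube start -o=json` prints. A sketch of a decoder for such a stream; the envelope shape assumed here (a CloudEvents-style object with a "type" field and a string-valued "data" map) is an assumption inferred from the DistinctCurrentSteps/IncreasingCurrentSteps checks elsewhere in this report, not verified against this run:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the assumed shape of one minikube JSON log line.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate any non-JSON output lines
			}
			fmt.Printf("%-45s %s\n", e.Type, e.Data["message"])
		}
	}

Piping the start command's output through this would print each event's type and message, which is the property the subtest asserts on.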

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 22:12:56.320017  430652 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1013 22:12:56.320103  430652 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
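preload-exists passes because the tarball fetched by the previous subtest is found in the local cache, at the path shown in the log line above. A sketch of an equivalent check, assuming the cache layout visible in that path; the helper name and signature are hypothetical, not minikube's actual preload.go API:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists is a hypothetical helper mirroring the check logged
	// above: stat the preload tarball under the minikube cache directory.
	func preloadExists(minikubeHome, k8sVersion, runtime, arch string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		home, _ := os.UserHomeDir()
		fmt.Println(preloadExists(filepath.Join(home, ".minikube"), "v1.28.0", "cri-o", "arm64"))
	}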

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-503320
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-503320: exit status 85 (94.271356ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-503320 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-503320 │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:12:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:12:46.436775  430657 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:12:46.437031  430657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:12:46.437044  430657 out.go:374] Setting ErrFile to fd 2...
	I1013 22:12:46.437049  430657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:12:46.437416  430657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	W1013 22:12:46.437614  430657 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-428797/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-428797/.minikube/config/config.json: no such file or directory
	I1013 22:12:46.438187  430657 out.go:368] Setting JSON to true
	I1013 22:12:46.439331  430657 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6903,"bootTime":1760386664,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:12:46.439416  430657 start.go:141] virtualization:  
	I1013 22:12:46.443858  430657 out.go:99] [download-only-503320] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1013 22:12:46.444027  430657 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 22:12:46.444179  430657 notify.go:220] Checking for updates...
	I1013 22:12:46.447113  430657 out.go:171] MINIKUBE_LOCATION=21724
	I1013 22:12:46.450066  430657 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:12:46.453099  430657 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:12:46.456041  430657 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:12:46.458925  430657 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1013 22:12:46.464608  430657 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 22:12:46.464915  430657 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:12:46.486016  430657 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:12:46.486124  430657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:12:46.552744  430657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-13 22:12:46.543598358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:12:46.552860  430657 docker.go:318] overlay module found
	I1013 22:12:46.555853  430657 out.go:99] Using the docker driver based on user configuration
	I1013 22:12:46.555939  430657 start.go:305] selected driver: docker
	I1013 22:12:46.555949  430657 start.go:925] validating driver "docker" against <nil>
	I1013 22:12:46.556055  430657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:12:46.608895  430657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-13 22:12:46.599687812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:12:46.609064  430657 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:12:46.609345  430657 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1013 22:12:46.609503  430657 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 22:12:46.612596  430657 out.go:171] Using Docker driver with root privileges
	I1013 22:12:46.615571  430657 cni.go:84] Creating CNI manager for ""
	I1013 22:12:46.615648  430657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:12:46.615662  430657 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:12:46.615752  430657 start.go:349] cluster config:
	{Name:download-only-503320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-503320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:12:46.618613  430657 out.go:99] Starting "download-only-503320" primary control-plane node in "download-only-503320" cluster
	I1013 22:12:46.618633  430657 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:12:46.621465  430657 out.go:99] Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:12:46.621502  430657 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:12:46.621673  430657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:12:46.637697  430657 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 22:12:46.638496  430657 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 22:12:46.638598  430657 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 22:12:46.682914  430657 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 22:12:46.682941  430657 cache.go:58] Caching tarball of preloaded images
	I1013 22:12:46.683165  430657 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:12:46.687126  430657 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1013 22:12:46.687154  430657 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1013 22:12:46.769681  430657 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1013 22:12:46.769819  430657 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-503320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-503320"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
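
Note: the preload download above is integrity-checked: the test fetches an MD5 from the GCS API ("Got checksum ... e092595ade89dbfc477bd4cd6b9c633b") and passes it in the URL's checksum=md5: query parameter. A minimal Go sketch of the same verification, assuming the tarball named in the log has already been downloaded to the working directory (file name and expected sum are copied from this log and are specific to this preload):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        const expected = "e092595ade89dbfc477bd4cd6b9c633b" // from the log above
        f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        h := md5.New() // the download URL names md5 as the checksum algorithm
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
        }
        fmt.Println("preload checksum OK")
    }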

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-503320
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (7.37s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-648593 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-648593 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.374425011s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (7.37s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 22:13:04.143431  430652 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1013 22:13:04.143471  430652 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-648593
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-648593: exit status 85 (93.795727ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-503320 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-503320 │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │ 13 Oct 25 22:12 UTC │
	│ delete  │ -p download-only-503320                                                                                                                                                   │ download-only-503320 │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │ 13 Oct 25 22:12 UTC │
	│ start   │ -o=json --download-only -p download-only-648593 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-648593 │ jenkins │ v1.37.0 │ 13 Oct 25 22:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:12:56
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:12:56.811296  430856 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:12:56.811462  430856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:12:56.811473  430856 out.go:374] Setting ErrFile to fd 2...
	I1013 22:12:56.811478  430856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:12:56.811798  430856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:12:56.812317  430856 out.go:368] Setting JSON to true
	I1013 22:12:56.813230  430856 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6913,"bootTime":1760386664,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:12:56.813296  430856 start.go:141] virtualization:  
	I1013 22:12:56.816813  430856 out.go:99] [download-only-648593] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:12:56.817130  430856 notify.go:220] Checking for updates...
	I1013 22:12:56.820297  430856 out.go:171] MINIKUBE_LOCATION=21724
	I1013 22:12:56.823531  430856 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:12:56.826418  430856 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:12:56.829447  430856 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:12:56.832447  430856 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1013 22:12:56.838364  430856 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 22:12:56.838640  430856 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:12:56.862672  430856 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:12:56.862790  430856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:12:56.927190  430856 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:12:56.916890442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:12:56.927303  430856 docker.go:318] overlay module found
	I1013 22:12:56.930392  430856 out.go:99] Using the docker driver based on user configuration
	I1013 22:12:56.930454  430856 start.go:305] selected driver: docker
	I1013 22:12:56.930469  430856 start.go:925] validating driver "docker" against <nil>
	I1013 22:12:56.930572  430856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:12:56.984905  430856 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-13 22:12:56.975883896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:12:56.985072  430856 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:12:56.985365  430856 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1013 22:12:56.985524  430856 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 22:12:56.988650  430856 out.go:171] Using Docker driver with root privileges
	I1013 22:12:56.991510  430856 cni.go:84] Creating CNI manager for ""
	I1013 22:12:56.991581  430856 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:12:56.991597  430856 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:12:56.991678  430856 start.go:349] cluster config:
	{Name:download-only-648593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-648593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:12:56.994653  430856 out.go:99] Starting "download-only-648593" primary control-plane node in "download-only-648593" cluster
	I1013 22:12:56.994688  430856 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:12:56.997620  430856 out.go:99] Pulling base image v0.0.48-1760363564-21724 ...
	I1013 22:12:56.997671  430856 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:12:56.997769  430856 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1013 22:12:57.014644  430856 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1013 22:12:57.014779  430856 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1013 22:12:57.014805  430856 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1013 22:12:57.014811  430856 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1013 22:12:57.014821  430856 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1013 22:12:57.053867  430856 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:12:57.053896  430856 cache.go:58] Caching tarball of preloaded images
	I1013 22:12:57.054089  430856 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:12:57.057192  430856 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1013 22:12:57.057232  430856 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1013 22:12:57.154449  430856 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1013 22:12:57.154508  430856 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21724-428797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-648593 host does not exist
	  To start a cluster, run: "minikube start -p download-only-648593"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-648593
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1013 22:13:05.331646  430652 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-193732 --alsologtostderr --binary-mirror http://127.0.0.1:45831 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-193732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-193732
--- PASS: TestBinaryMirror (0.63s)
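
Note: TestBinaryMirror points minikube's --binary-mirror flag at a local HTTP endpoint (http://127.0.0.1:45831 above), so any static file server exposing a dl.k8s.io-style directory tree suffices. A minimal Go sketch of such a mirror; the ./mirror directory name is an assumption, and the tree under it is assumed to follow the release/<version>/bin/<os>/<arch> layout seen in the kubectl URL above:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve ./mirror verbatim; the assumption here is that minikube swaps only
        // the URL base, so request paths keep the upstream dl.k8s.io layout
        // (e.g. .../v1.34.1/bin/linux/arm64/kubectl).
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Println("binary mirror listening on 127.0.0.1:45831")
        log.Fatal(http.ListenAndServe("127.0.0.1:45831", nil))
    }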

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-801288
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-801288: exit status 85 (77.967028ms)

-- stdout --
	* Profile "addons-801288" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-801288"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-801288
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-801288: exit status 85 (75.576135ms)

-- stdout --
	* Profile "addons-801288" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-801288"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (172.79s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-801288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-801288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.793580508s)
--- PASS: TestAddons/Setup (172.79s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-801288 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-801288 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-801288 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-801288 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [86f7740d-0196-4e9d-b013-8bd776eb1fd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [86f7740d-0196-4e9d-b013-8bd776eb1fd8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0041639s
addons_test.go:694: (dbg) Run:  kubectl --context addons-801288 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-801288 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-801288 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-801288 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-801288
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-801288: (12.103185704s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-801288
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-801288
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-801288
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (44.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1013 23:10:42.857099  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:10:59.783203  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:11:05.659572  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-051941 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (41.082920466s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-051941 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-051941 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-051941 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-051941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-051941
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-051941: (2.510543829s)
--- PASS: TestCertOptions (44.61s)
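
Note: the openssl step above asserts that the extra --apiserver-ips and --apiserver-names end up in the apiserver certificate's SANs. An equivalent check in Go, assuming apiserver.crt has first been copied out of the node (e.g. minikube ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt, the path the test inspects):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // assumption: copied out of the node
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)   // should include localhost, www.google.com
        fmt.Println("IP SANs:", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
    }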

TestCertExpiration (248.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-896873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.869979026s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-896873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.595099324s)
helpers_test.go:175: Cleaning up "cert-expiration-896873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-896873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-896873: (3.472228184s)
--- PASS: TestCertExpiration (248.94s)
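
Note: the two starts above exercise --cert-expiration, first with 3m (so the certs are expired by the second start and must be regenerated) and then with 8760h. One way to observe the effect is to read NotAfter from the regenerated apiserver certificate; a minimal Go sketch, assuming the cert has been copied out of the node as in the TestCertOptions note above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // assumption: copied out of the node
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("NotAfter: %s (expires in %s)\n",
            cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
    }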

TestForceSystemdFlag (38.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-388118 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-388118 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.354769292s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-388118 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-388118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-388118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-388118: (2.753413553s)
--- PASS: TestForceSystemdFlag (38.42s)
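
Note: the ssh step above reads the crio drop-in to confirm that --force-systemd selected the systemd cgroup manager. A minimal Go version of the same check, assuming the drop-in has been copied out of the node (e.g. minikube ssh "cat /etc/crio/crio.conf.d/02-crio.conf" > 02-crio.conf) and that it spells the setting cgroup_manager = "systemd", the key crio.conf uses:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("02-crio.conf") // assumption: fetched from /etc/crio/crio.conf.d/
        if err != nil {
            log.Fatal(err)
        }
        if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
            fmt.Println("crio: systemd cgroup manager configured")
        } else {
            fmt.Println("crio: systemd cgroup manager NOT configured")
        }
    }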

TestForceSystemdEnv (39.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-255188 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.089444742s)
helpers_test.go:175: Cleaning up "force-systemd-env-255188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-255188
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-255188: (3.096116299s)
--- PASS: TestForceSystemdEnv (39.19s)

TestErrorSpam/setup (33.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-104926 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104926 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-104926 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104926 --driver=docker  --container-runtime=crio: (33.838051113s)
--- PASS: TestErrorSpam/setup (33.84s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause: exit status 80 (1.863045814s)

-- stdout --
	* Pausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:19:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause: exit status 80 (1.978074164s)

-- stdout --
	* Pausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:20:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause: exit status 80 (2.157622956s)

-- stdout --
	* Pausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.00s)
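
Note: all three pause attempts (and the unpause attempts below) fail at the same probe: minikube shells out to sudo runc list -f json to enumerate containers before (un)pausing them, and runc exits 1 because its default state directory /run/runc does not exist on this crio node. A minimal Go sketch of that probe, assuming it is run inside the node (e.g. via minikube ssh) with runc on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same command the log shows failing; on this node it reproduces
        // `open /run/runc: no such file or directory` and a non-zero exit.
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
        fmt.Printf("output: %s\n", out)
        if err != nil {
            fmt.Printf("runc list failed: %v\n", err)
        }
    }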

TestErrorSpam/unpause (5.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause: exit status 80 (1.972382614s)

-- stdout --
	* Unpausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:20:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause: exit status 80 (1.860300342s)

-- stdout --
	* Unpausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:20:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause: exit status 80 (1.931079073s)

-- stdout --
	* Unpausing node nospam-104926 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:20:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.76s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 stop: (1.318409308s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-104926 --log_dir /tmp/nospam-104926 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-428797/.minikube/files/etc/test/nested/copy/430652/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1013 22:20:59.786935  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:59.793477  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:59.804932  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:59.826425  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:59.867863  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:59.949334  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:00.111242  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:00.433000  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:01.075134  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:02.356839  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:04.918509  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:10.043068  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:21:20.285459  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-544242 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.33674537s)
--- PASS: TestFunctional/serial/StartWithProxy (82.34s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.67s)

=== RUN   TestFunctional/serial/SoftStart
I1013 22:21:38.036300  430652 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --alsologtostderr -v=8
E1013 22:21:40.766853  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-544242 --alsologtostderr -v=8: (27.663064268s)
functional_test.go:678: soft start took 27.667986809s for "functional-544242" cluster.
I1013 22:22:05.699700  430652 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.67s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-544242 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:3.1: (1.199364938s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:3.3: (1.331162362s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 cache add registry.k8s.io/pause:latest: (1.124514854s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)
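For reference, the remote-image caching flow exercised above can be replayed by hand. A minimal sketch, assuming a stock minikube binary stands in for the test build out/minikube-linux-arm64 and reusing the profile name from this run:

	# pull images into the local minikube cache and load them into the node
	minikube -p functional-544242 cache add registry.k8s.io/pause:3.1
	minikube -p functional-544242 cache add registry.k8s.io/pause:3.3
	minikube -p functional-544242 cache add registry.k8s.io/pause:latest
	# the runtime is crio in this run, so verify from inside the node with crictl
	minikube -p functional-544242 ssh sudo crictl images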

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-544242 /tmp/TestFunctionalserialCacheCmdcacheadd_local4097564102/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache add minikube-local-cache-test:functional-544242
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache delete minikube-local-cache-test:functional-544242
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-544242
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.781501ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
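The cache_reload sequence above amounts to: delete an image inside the node, confirm it is gone, repopulate from the host-side cache, and confirm it is back. A sketch using a stock minikube binary, with the image and profile names taken from the log:

	minikube -p functional-544242 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti now fails with exit 1: no such image present
	minikube -p functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
	minikube -p functional-544242 cache reload
	minikube -p functional-544242 ssh sudo crictl inspecti registry.k8s.io/pause:latest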

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 kubectl -- --context functional-544242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-544242 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (40s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1013 22:22:21.728280  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-544242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.00260028s)
functional_test.go:776: restart took 40.002691569s for "functional-544242" cluster.
I1013 22:22:53.279277  430652 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.00s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-544242 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 logs: (1.439573301s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 logs --file /tmp/TestFunctionalserialLogsFileCmd4121331629/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 logs --file /tmp/TestFunctionalserialLogsFileCmd4121331629/001/logs.txt: (1.507865196s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-544242 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-544242
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-544242: exit status 115 (386.217554ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32296 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-544242 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
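What this test drives: minikube service refuses to open a Service that has no running backing pod and exits with code 115 (SVC_UNREACHABLE), as the stderr above shows. A sketch of the same check; invalidsvc.yaml here is a placeholder for any Service whose selector matches no pod (the repo's testdata/invalidsvc.yaml plays that role above):

	kubectl --context functional-544242 apply -f invalidsvc.yaml
	minikube service invalid-svc -p functional-544242
	echo $?   # 115: X Exiting due to SVC_UNREACHABLE
	kubectl --context functional-544242 delete -f invalidsvc.yaml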

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 config get cpus: exit status 14 (109.298305ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 config get cpus: exit status 14 (87.311183ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
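The exit codes above are the contract being tested: config get on an unset key exits 14 with "specified key could not be found in config", while set and unset exit 0. A minimal replay with a stock minikube binary:

	minikube -p functional-544242 config get cpus || echo "unset (exit $?)"   # exit 14
	minikube -p functional-544242 config set cpus 2
	minikube -p functional-544242 config get cpus    # prints 2
	minikube -p functional-544242 config unset cpus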

TestFunctional/parallel/DashboardCmd (12.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-544242 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-544242 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 457225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.04s)

TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-544242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (260.263953ms)

-- stdout --
	* [functional-544242] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1013 22:33:30.898625  456645 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:33:30.898875  456645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:30.898916  456645 out.go:374] Setting ErrFile to fd 2...
	I1013 22:33:30.898948  456645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:30.899342  456645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:33:30.899958  456645 out.go:368] Setting JSON to false
	I1013 22:33:30.901159  456645 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8147,"bootTime":1760386664,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:33:30.901356  456645 start.go:141] virtualization:  
	I1013 22:33:30.904945  456645 out.go:179] * [functional-544242] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:33:30.908084  456645 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:33:30.908160  456645 notify.go:220] Checking for updates...
	I1013 22:33:30.912064  456645 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:33:30.915056  456645 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:33:30.918218  456645 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:33:30.921128  456645 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:33:30.923993  456645 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:33:30.927505  456645 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:30.928148  456645 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:33:30.960621  456645 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:33:30.960745  456645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:33:31.059761  456645 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:33:31.049223769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:33:31.059868  456645 docker.go:318] overlay module found
	I1013 22:33:31.063117  456645 out.go:179] * Using the docker driver based on existing profile
	I1013 22:33:31.065952  456645 start.go:305] selected driver: docker
	I1013 22:33:31.065977  456645 start.go:925] validating driver "docker" against &{Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:33:31.066090  456645 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:33:31.069638  456645 out.go:203] 
	W1013 22:33:31.072475  456645 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 22:33:31.075308  456645 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
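The dry-run path validates flags against the existing profile without creating or mutating anything; a memory request below the 1800MB floor is rejected with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, well-formed invocation succeeds. Replayed with a stock minikube binary:

	minikube start -p functional-544242 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY
	minikube start -p functional-544242 --dry-run --driver=docker --container-runtime=crio   # exit 0, nothing started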

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-544242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-544242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (283.980502ms)

-- stdout --
	* [functional-544242] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1013 22:33:30.631824  456568 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:33:30.632011  456568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:30.632025  456568 out.go:374] Setting ErrFile to fd 2...
	I1013 22:33:30.632031  456568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:30.633540  456568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:33:30.633986  456568 out.go:368] Setting JSON to false
	I1013 22:33:30.634911  456568 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8147,"bootTime":1760386664,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 22:33:30.634976  456568 start.go:141] virtualization:  
	I1013 22:33:30.638348  456568 out.go:179] * [functional-544242] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1013 22:33:30.641371  456568 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:33:30.641388  456568 notify.go:220] Checking for updates...
	I1013 22:33:30.647131  456568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:33:30.649956  456568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 22:33:30.652807  456568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 22:33:30.655692  456568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:33:30.659219  456568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:33:30.662515  456568 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:30.663383  456568 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:33:30.707923  456568 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:33:30.708042  456568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:33:30.797518  456568 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:33:30.787246066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:33:30.797622  456568 docker.go:318] overlay module found
	I1013 22:33:30.800835  456568 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1013 22:33:30.802768  456568 start.go:305] selected driver: docker
	I1013 22:33:30.802788  456568 start.go:925] validating driver "docker" against &{Name:functional-544242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-544242 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:33:30.802902  456568 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:33:30.806689  456568 out.go:203] 
	W1013 22:33:30.809822  456568 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 22:33:30.812718  456568 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
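The French lines above read "minikube v1.37.0 on Ubuntu 20.04 (arm64)", "Using the docker driver based on the existing profile", and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB". minikube selects its message catalog from the standard locale environment variables, so the localized run can presumably be reproduced as follows (assumption: LC_ALL is honored the way the test harness sets the locale):

	LC_ALL=fr_FR.UTF-8 minikube start -p functional-544242 --dry-run --memory 250MB --driver=docker --container-runtime=crio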

TestFunctional/parallel/StatusCmd (1.39s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.39s)
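The three status invocations above cover the default table, a custom go-template, and JSON output. With a stock minikube binary:

	minikube -p functional-544242 status
	minikube -p functional-544242 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-544242 status -o json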

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (23.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ef036d96-835e-4a32-bd45-950819b494e4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006267792s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-544242 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-544242 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-544242 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-544242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [035fcc08-eaec-4422-b4b5-f3c2898cf4c4] Pending
helpers_test.go:352: "sp-pod" [035fcc08-eaec-4422-b4b5-f3c2898cf4c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [035fcc08-eaec-4422-b4b5-f3c2898cf4c4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003284219s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-544242 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-544242 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-544242 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [02b654fc-54f1-459f-bd16-55d35bacf0ed] Pending
helpers_test.go:352: "sp-pod" [02b654fc-54f1-459f-bd16-55d35bacf0ed] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003358223s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-544242 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.61s)
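The PVC test proves data outlives the pod: write through the claim, delete only the pod, recreate it, and read the file back. A sketch of the kubectl flow; pvc.yaml and pod.yaml are placeholders for the repo's testdata (the pod is named sp-pod and mounts the claim at /tmp/mount, as the log shows):

	kubectl --context functional-544242 apply -f pvc.yaml -f pod.yaml
	kubectl --context functional-544242 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-544242 delete -f pod.yaml
	kubectl --context functional-544242 apply -f pod.yaml
	kubectl --context functional-544242 exec sp-pod -- ls /tmp/mount   # foo survives the pod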

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh -n functional-544242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cp functional-544242:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2928095060/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh -n functional-544242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh -n functional-544242 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)
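minikube cp works in both directions and, per the third copy above, creates missing target directories inside the node. Replayed with a stock binary (the -n flag names the node to read from):

	minikube -p functional-544242 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-544242 cp functional-544242:/home/docker/cp-test.txt ./cp-test.txt
	minikube -p functional-544242 ssh -n functional-544242 "sudo cat /home/docker/cp-test.txt"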

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/430652/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /etc/test/nested/copy/430652/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)
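What FileSync verifies: a file placed under $MINIKUBE_HOME/files/ on the host is copied into the node at the same relative path when the cluster starts. A sketch, assuming the default ~/.minikube as MINIKUBE_HOME and mirroring the path checked above:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/430652
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/430652/hosts
	minikube start -p functional-544242          # sync happens during start
	minikube -p functional-544242 ssh "sudo cat /etc/test/nested/copy/430652/hosts"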

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/430652.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /etc/ssl/certs/430652.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/430652.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /usr/share/ca-certificates/430652.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4306522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /etc/ssl/certs/4306522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4306522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /usr/share/ca-certificates/4306522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)
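CertSync covers minikube's custom-CA mechanism: PEM files dropped in $MINIKUBE_HOME/certs are installed in the node under /etc/ssl/certs and /usr/share/ca-certificates, plus an openssl-hash alias like the 51391683.0 checked above. A sketch with a placeholder certificate (my-ca.pem is hypothetical):

	cp my-ca.pem ~/.minikube/certs/
	minikube start -p functional-544242
	minikube -p functional-544242 ssh "sudo cat /etc/ssl/certs/my-ca.pem"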

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-544242 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh "sudo systemctl is-active docker": exit status 1 (379.894654ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh "sudo systemctl is-active containerd": exit status 1 (347.117016ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 452750: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-544242 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [cbbcd3d7-e2e9-4c4c-9ea1-aeb9a914dfab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [cbbcd3d7-e2e9-4c4c-9ea1-aeb9a914dfab] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003331445s
I1013 22:23:13.047715  430652 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-544242 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.180.140 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
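
The tunnel flow above can be reproduced by hand. A minimal sketch, assuming the functional-544242 profile is running and nginx-svc from testdata/testsvc.yaml is deployed; curl here stands in for the HTTP probe the test performs internally:

	# Terminal 1: keep a tunnel open so LoadBalancer services receive an ingress IP.
	$ out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr

	# Terminal 2: read the assigned IP (same jsonpath the test uses) and probe it.
	$ IP=$(kubectl --context functional-544242 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	$ curl -sf "http://$IP/" -o /dev/null && echo "tunnel at http://$IP is working!"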

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-544242 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.892724ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.884527ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "385.911273ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.979751ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
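
For scripting, the JSON output can be filtered further. A hedged sketch, assuming jq is installed and that profile list -o json keeps its usual top-level "valid"/"invalid" arrays (both are assumptions, not shown in this run):

	# Print just the names of valid profiles.
	$ out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'

The --light variant returns in ~55ms versus ~386ms above because it skips probing each cluster's status.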

TestFunctional/parallel/MountCmd/any-port (7.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdany-port3678645834/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760394798206484209" to /tmp/TestFunctionalparallelMountCmdany-port3678645834/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760394798206484209" to /tmp/TestFunctionalparallelMountCmdany-port3678645834/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760394798206484209" to /tmp/TestFunctionalparallelMountCmdany-port3678645834/001/test-1760394798206484209
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (342.155698ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1013 22:33:18.549888  430652 retry.go:31] will retry after 693.962598ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 22:33 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 22:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 22:33 test-1760394798206484209
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh cat /mount-9p/test-1760394798206484209
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-544242 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [49dbe0f1-cd06-4ed1-b901-2c64ef4969af] Pending
helpers_test.go:352: "busybox-mount" [49dbe0f1-cd06-4ed1-b901-2c64ef4969af] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [49dbe0f1-cd06-4ed1-b901-2c64ef4969af] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [49dbe0f1-cd06-4ed1-b901-2c64ef4969af] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003178669s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-544242 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdany-port3678645834/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.09s)
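
The 9p mount exercised above can be driven manually. A minimal sketch of the same flow, with /tmp/demo standing in for the per-test temp directory; note the test retries findmnt once because the mount takes a moment to appear:

	# Foreground process that serves the host directory into the guest over 9p.
	$ out/minikube-linux-arm64 mount -p functional-544242 /tmp/demo:/mount-9p --alsologtostderr -v=1

	# From a second terminal: verify and inspect the mount.
	$ out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-arm64 -p functional-544242 ssh -- ls -la /mount-9p

	# Tear down all mounts for the profile, as VerifyCleanup does below.
	$ out/minikube-linux-arm64 mount -p functional-544242 --kill=true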

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdspecific-port3954869786/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.584857ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1013 22:33:25.658888  430652 retry.go:31] will retry after 733.322877ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdspecific-port3954869786/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh "sudo umount -f /mount-9p": exit status 1 (376.256974ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-544242 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdspecific-port3954869786/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-544242 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-544242 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1015083560/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctional/parallel/ServiceCmd/List (0.95s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.95s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 service list -o json
functional_test.go:1504: Took "593.572107ms" to run "out/minikube-linux-arm64 -p functional-544242 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 version -o=json --components: (1.550105231s)
--- PASS: TestFunctional/parallel/Version/components (1.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-544242 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-544242 image ls --format short --alsologtostderr:
I1013 22:33:45.573218  458770 out.go:360] Setting OutFile to fd 1 ...
I1013 22:33:45.573427  458770 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:45.573456  458770 out.go:374] Setting ErrFile to fd 2...
I1013 22:33:45.573474  458770 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:45.573777  458770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
I1013 22:33:45.574521  458770 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:45.574704  458770 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:45.575245  458770 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
I1013 22:33:45.598671  458770 ssh_runner.go:195] Run: systemctl --version
I1013 22:33:45.598734  458770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
I1013 22:33:45.631679  458770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
I1013 22:33:45.742950  458770 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-544242 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-544242 image ls --format table --alsologtostderr:
I1013 22:33:46.387740  458979 out.go:360] Setting OutFile to fd 1 ...
I1013 22:33:46.387919  458979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.387945  458979 out.go:374] Setting ErrFile to fd 2...
I1013 22:33:46.387964  458979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.389669  458979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
I1013 22:33:46.390387  458979 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.390584  458979 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.391092  458979 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
I1013 22:33:46.408688  458979 ssh_runner.go:195] Run: systemctl --version
I1013 22:33:46.408743  458979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
I1013 22:33:46.426533  458979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
I1013 22:33:46.541730  458979 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-544242 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-544242 image ls --format json --alsologtostderr:
I1013 22:33:46.083268  458893 out.go:360] Setting OutFile to fd 1 ...
I1013 22:33:46.083431  458893 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.083438  458893 out.go:374] Setting ErrFile to fd 2...
I1013 22:33:46.083444  458893 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.083714  458893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
I1013 22:33:46.084315  458893 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.084423  458893 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.084862  458893 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
I1013 22:33:46.120306  458893 ssh_runner.go:195] Run: systemctl --version
I1013 22:33:46.120525  458893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
I1013 22:33:46.145174  458893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
I1013 22:33:46.257772  458893 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
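
The JSON format is the machine-readable variant of image ls; each entry carries id, repoDigests, repoTags, and size, as shown above. A hedged sketch of filtering it, assuming jq is available:

	# Print "tag  size-in-bytes" for tagged images; untagged entries (repoTags: []),
	# such as the dashboard image above, are skipped.
	$ out/minikube-linux-arm64 -p functional-544242 image ls --format json \
	    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'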

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-544242 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-544242 image ls --format yaml --alsologtostderr:
I1013 22:33:45.759442  458816 out.go:360] Setting OutFile to fd 1 ...
I1013 22:33:45.759626  458816 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:45.759633  458816 out.go:374] Setting ErrFile to fd 2...
I1013 22:33:45.759637  458816 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:45.759952  458816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
I1013 22:33:45.760750  458816 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:45.760951  458816 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:45.761578  458816 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
I1013 22:33:45.785625  458816 ssh_runner.go:195] Run: systemctl --version
I1013 22:33:45.785713  458816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
I1013 22:33:45.816819  458816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
I1013 22:33:45.935470  458816 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-544242 ssh pgrep buildkitd: exit status 1 (398.93896ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image build -t localhost/my-image:functional-544242 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-544242 image build -t localhost/my-image:functional-544242 testdata/build --alsologtostderr: (3.366625359s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-544242 image build -t localhost/my-image:functional-544242 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1c3a0616a7e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-544242
--> 7db54fea2ce
Successfully tagged localhost/my-image:functional-544242
7db54fea2ce74e3266aa662b401d4c806b296c6c0a53dad0fee4aa3b85e477f2
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-544242 image build -t localhost/my-image:functional-544242 testdata/build --alsologtostderr:
I1013 22:33:46.254100  458946 out.go:360] Setting OutFile to fd 1 ...
I1013 22:33:46.254933  458946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.254962  458946 out.go:374] Setting ErrFile to fd 2...
I1013 22:33:46.254981  458946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 22:33:46.255410  458946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
I1013 22:33:46.256412  458946 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.257962  458946 config.go:182] Loaded profile config "functional-544242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 22:33:46.259164  458946 cli_runner.go:164] Run: docker container inspect functional-544242 --format={{.State.Status}}
I1013 22:33:46.281730  458946 ssh_runner.go:195] Run: systemctl --version
I1013 22:33:46.281786  458946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-544242
I1013 22:33:46.321581  458946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/functional-544242/id_rsa Username:docker}
I1013 22:33:46.438500  458946 build_images.go:161] Building image from path: /tmp/build.1622754061.tar
I1013 22:33:46.438563  458946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 22:33:46.447461  458946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1622754061.tar
I1013 22:33:46.451593  458946 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1622754061.tar: stat -c "%s %y" /var/lib/minikube/build/build.1622754061.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1622754061.tar': No such file or directory
I1013 22:33:46.451626  458946 ssh_runner.go:362] scp /tmp/build.1622754061.tar --> /var/lib/minikube/build/build.1622754061.tar (3072 bytes)
I1013 22:33:46.475951  458946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1622754061
I1013 22:33:46.484493  458946 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1622754061 -xf /var/lib/minikube/build/build.1622754061.tar
I1013 22:33:46.492625  458946 crio.go:315] Building image: /var/lib/minikube/build/build.1622754061
I1013 22:33:46.492750  458946 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-544242 /var/lib/minikube/build/build.1622754061 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1013 22:33:49.543508  458946 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-544242 /var/lib/minikube/build/build.1622754061 --cgroup-manager=cgroupfs: (3.050701878s)
I1013 22:33:49.543573  458946 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1622754061
I1013 22:33:49.551593  458946 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1622754061.tar
I1013 22:33:49.559526  458946 build_images.go:217] Built localhost/my-image:functional-544242 from /tmp/build.1622754061.tar
I1013 22:33:49.559560  458946 build_images.go:133] succeeded building to: functional-544242
I1013 22:33:49.559565  458946 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
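
The three STEP lines in the build output imply that testdata/build contains a Dockerfile along these lines (a reconstruction from the log, not a copy of the repo file):

	$ cat testdata/build/Dockerfile
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

As the stderr log shows, minikube ships the build context to the node as a tar (build.1622754061.tar) and, on crio, delegates the build itself to podman with --cgroup-manager=cgroupfs.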

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-544242
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image rm kicbase/echo-server:functional-544242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-544242 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-544242
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-544242
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-544242
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (205.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1013 22:35:59.782641  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m24.455343127s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.35s)
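
With --ha, minikube starts multiple control-plane nodes for the profile. A quick way to confirm the topology once start returns; kubectl get nodes is a generic check rather than part of the test, and the -m02/-m03 suffixes follow minikube's usual node-naming convention (an assumption, not shown in this log):

	$ kubectl --context ha-548764 get nodes
	# expect ha-548764 plus ha-548764-m02 and ha-548764-m03 with the control-plane role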

TestMultiControlPlane/serial/DeployApp (7.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- rollout status deployment/busybox
E1013 22:37:22.853562  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 kubectl -- rollout status deployment/busybox: (4.644599298s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-57p4z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-k8zfp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-n574p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-57p4z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-k8zfp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-n574p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-57p4z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-k8zfp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-n574p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.70s)

TestMultiControlPlane/serial/PingHostFromPods (1.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-57p4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-57p4z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-k8zfp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-k8zfp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-n574p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 kubectl -- exec busybox-7b57f96db7-n574p -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)
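
The shell pipeline in this test is worth unpacking: with the busybox 1.28 nslookup these pods appear to use (an assumption based on the gcr.io/k8s-minikube/busybox:1.28.4-glibc image listed earlier), the answer lands on line 5 of the output, so awk 'NR==5' selects that line and cut -d' ' -f3 keeps its third space-separated field, the IP itself:

	$ kubectl --context ha-548764 exec busybox-7b57f96db7-57p4z -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# line 5 looks like "Address 1: 192.168.49.1 host.minikube.internal",
	# so the pipeline prints 192.168.49.1, the host gateway pinged next.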

TestMultiControlPlane/serial/AddWorkerNode (61.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node add --alsologtostderr -v 5
E1013 22:38:02.591794  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.598252  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.610056  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.631442  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.672930  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.754370  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:02.915860  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:03.237181  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:03.878662  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:05.160610  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:07.723151  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:12.844846  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:38:23.086727  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 node add --alsologtostderr -v 5: (1m0.575431148s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5: (1.09574095s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.67s)
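
The repeated cert_rotation warnings above appear to be leftover client-cert watches for the earlier functional-544242 profile, which was deleted before this suite ran; they do not affect the result. A hedged sketch of the operation under test, reusing the commands from the log:

# add a fourth node (it joins as worker ha-548764-m04) and confirm it registered
out/minikube-linux-arm64 -p ha-548764 node add --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
kubectl --context ha-548764 get nodes -o wide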

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-548764 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.08055903s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 status --output json --alsologtostderr -v 5: (1.11228785s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp testdata/cp-test.txt ha-548764:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514912691/001/cp-test_ha-548764.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764:/home/docker/cp-test.txt ha-548764-m02:/home/docker/cp-test_ha-548764_ha-548764-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test_ha-548764_ha-548764-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764:/home/docker/cp-test.txt ha-548764-m03:/home/docker/cp-test_ha-548764_ha-548764-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test_ha-548764_ha-548764-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764:/home/docker/cp-test.txt ha-548764-m04:/home/docker/cp-test_ha-548764_ha-548764-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test_ha-548764_ha-548764-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp testdata/cp-test.txt ha-548764-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514912691/001/cp-test_ha-548764-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m02:/home/docker/cp-test.txt ha-548764:/home/docker/cp-test_ha-548764-m02_ha-548764.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test_ha-548764-m02_ha-548764.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m02:/home/docker/cp-test.txt ha-548764-m03:/home/docker/cp-test_ha-548764-m02_ha-548764-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test_ha-548764-m02_ha-548764-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m02:/home/docker/cp-test.txt ha-548764-m04:/home/docker/cp-test_ha-548764-m02_ha-548764-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test_ha-548764-m02_ha-548764-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp testdata/cp-test.txt ha-548764-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514912691/001/cp-test_ha-548764-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m03:/home/docker/cp-test.txt ha-548764:/home/docker/cp-test_ha-548764-m03_ha-548764.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test_ha-548764-m03_ha-548764.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m03:/home/docker/cp-test.txt ha-548764-m02:/home/docker/cp-test_ha-548764-m03_ha-548764-m02.txt
E1013 22:38:43.568296  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test_ha-548764-m03_ha-548764-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m03:/home/docker/cp-test.txt ha-548764-m04:/home/docker/cp-test_ha-548764-m03_ha-548764-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test_ha-548764-m03_ha-548764-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp testdata/cp-test.txt ha-548764-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2514912691/001/cp-test_ha-548764-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m04:/home/docker/cp-test.txt ha-548764:/home/docker/cp-test_ha-548764-m04_ha-548764.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764 "sudo cat /home/docker/cp-test_ha-548764-m04_ha-548764.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m04:/home/docker/cp-test.txt ha-548764-m02:/home/docker/cp-test_ha-548764-m04_ha-548764-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m02 "sudo cat /home/docker/cp-test_ha-548764-m04_ha-548764-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 cp ha-548764-m04:/home/docker/cp-test.txt ha-548764-m03:/home/docker/cp-test_ha-548764-m04_ha-548764-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 ssh -n ha-548764-m03 "sudo cat /home/docker/cp-test_ha-548764-m04_ha-548764-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.37s)
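
The block above is a full copy matrix: every node receives testdata/cp-test.txt and every pairwise copy is read back over ssh. Condensed into a loop over the same four nodes (a sketch, not the test's actual code):

for NODE in ha-548764 ha-548764-m02 ha-548764-m03 ha-548764-m04; do
  out/minikube-linux-arm64 -p ha-548764 cp testdata/cp-test.txt "$NODE":/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-548764 ssh -n "$NODE" "sudo cat /home/docker/cp-test.txt"
done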

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 node stop m02 --alsologtostderr -v 5: (12.102154028s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5: exit status 7 (784.93964ms)

-- stdout --
	ha-548764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-548764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-548764-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1013 22:39:02.596586  473884 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:39:02.596790  473884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:39:02.596820  473884 out.go:374] Setting ErrFile to fd 2...
	I1013 22:39:02.596840  473884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:39:02.597129  473884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:39:02.597350  473884 out.go:368] Setting JSON to false
	I1013 22:39:02.597438  473884 mustload.go:65] Loading cluster: ha-548764
	I1013 22:39:02.597513  473884 notify.go:220] Checking for updates...
	I1013 22:39:02.598704  473884 config.go:182] Loaded profile config "ha-548764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:39:02.598770  473884 status.go:174] checking status of ha-548764 ...
	I1013 22:39:02.599521  473884 cli_runner.go:164] Run: docker container inspect ha-548764 --format={{.State.Status}}
	I1013 22:39:02.630752  473884 status.go:371] ha-548764 host status = "Running" (err=<nil>)
	I1013 22:39:02.630793  473884 host.go:66] Checking if "ha-548764" exists ...
	I1013 22:39:02.631339  473884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-548764
	I1013 22:39:02.664141  473884 host.go:66] Checking if "ha-548764" exists ...
	I1013 22:39:02.664438  473884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:39:02.664482  473884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-548764
	I1013 22:39:02.690646  473884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/ha-548764/id_rsa Username:docker}
	I1013 22:39:02.793711  473884 ssh_runner.go:195] Run: systemctl --version
	I1013 22:39:02.801555  473884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:39:02.814541  473884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:39:02.874657  473884 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-13 22:39:02.864700049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:39:02.875269  473884 kubeconfig.go:125] found "ha-548764" server: "https://192.168.49.254:8443"
	I1013 22:39:02.875311  473884 api_server.go:166] Checking apiserver status ...
	I1013 22:39:02.875362  473884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:39:02.887464  473884 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	I1013 22:39:02.895676  473884 api_server.go:182] apiserver freezer: "4:freezer:/docker/ce898463db185e9d21126c01cd60fd266a42fe8e139e2771ac3b70c1132470eb/crio/crio-4bb4351cc87b6257366cdabbe16429c774e1ba330a09dc420a5567e618fe6e7e"
	I1013 22:39:02.895762  473884 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ce898463db185e9d21126c01cd60fd266a42fe8e139e2771ac3b70c1132470eb/crio/crio-4bb4351cc87b6257366cdabbe16429c774e1ba330a09dc420a5567e618fe6e7e/freezer.state
	I1013 22:39:02.904547  473884 api_server.go:204] freezer state: "THAWED"
	I1013 22:39:02.904586  473884 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 22:39:02.912792  473884 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 22:39:02.912820  473884 status.go:463] ha-548764 apiserver status = Running (err=<nil>)
	I1013 22:39:02.912832  473884 status.go:176] ha-548764 status: &{Name:ha-548764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:39:02.912848  473884 status.go:174] checking status of ha-548764-m02 ...
	I1013 22:39:02.913167  473884 cli_runner.go:164] Run: docker container inspect ha-548764-m02 --format={{.State.Status}}
	I1013 22:39:02.929612  473884 status.go:371] ha-548764-m02 host status = "Stopped" (err=<nil>)
	I1013 22:39:02.929633  473884 status.go:384] host is not running, skipping remaining checks
	I1013 22:39:02.929640  473884 status.go:176] ha-548764-m02 status: &{Name:ha-548764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:39:02.929660  473884 status.go:174] checking status of ha-548764-m03 ...
	I1013 22:39:02.929977  473884 cli_runner.go:164] Run: docker container inspect ha-548764-m03 --format={{.State.Status}}
	I1013 22:39:02.948315  473884 status.go:371] ha-548764-m03 host status = "Running" (err=<nil>)
	I1013 22:39:02.948339  473884 host.go:66] Checking if "ha-548764-m03" exists ...
	I1013 22:39:02.948663  473884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-548764-m03
	I1013 22:39:02.976721  473884 host.go:66] Checking if "ha-548764-m03" exists ...
	I1013 22:39:02.977026  473884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:39:02.977120  473884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-548764-m03
	I1013 22:39:02.996424  473884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/ha-548764-m03/id_rsa Username:docker}
	I1013 22:39:03.101115  473884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:39:03.116640  473884 kubeconfig.go:125] found "ha-548764" server: "https://192.168.49.254:8443"
	I1013 22:39:03.116671  473884 api_server.go:166] Checking apiserver status ...
	I1013 22:39:03.116714  473884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:39:03.128257  473884 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	I1013 22:39:03.136866  473884 api_server.go:182] apiserver freezer: "4:freezer:/docker/36da418d077db51465ef0bf885bb79f90084249d883b8bf80a6937b201b30708/crio/crio-6c6e50f7c4d2f5711b6aa28402b5782dc7a5defcddf40c33f501a8a7e3971cea"
	I1013 22:39:03.136946  473884 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/36da418d077db51465ef0bf885bb79f90084249d883b8bf80a6937b201b30708/crio/crio-6c6e50f7c4d2f5711b6aa28402b5782dc7a5defcddf40c33f501a8a7e3971cea/freezer.state
	I1013 22:39:03.144922  473884 api_server.go:204] freezer state: "THAWED"
	I1013 22:39:03.144951  473884 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 22:39:03.153430  473884 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 22:39:03.153459  473884 status.go:463] ha-548764-m03 apiserver status = Running (err=<nil>)
	I1013 22:39:03.153469  473884 status.go:176] ha-548764-m03 status: &{Name:ha-548764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:39:03.153487  473884 status.go:174] checking status of ha-548764-m04 ...
	I1013 22:39:03.153797  473884 cli_runner.go:164] Run: docker container inspect ha-548764-m04 --format={{.State.Status}}
	I1013 22:39:03.171435  473884 status.go:371] ha-548764-m04 host status = "Running" (err=<nil>)
	I1013 22:39:03.171460  473884 host.go:66] Checking if "ha-548764-m04" exists ...
	I1013 22:39:03.171756  473884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-548764-m04
	I1013 22:39:03.189704  473884 host.go:66] Checking if "ha-548764-m04" exists ...
	I1013 22:39:03.190024  473884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:39:03.190085  473884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-548764-m04
	I1013 22:39:03.207280  473884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/ha-548764-m04/id_rsa Username:docker}
	I1013 22:39:03.308422  473884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:39:03.323182  473884 status.go:176] ha-548764-m04 status: &{Name:ha-548764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
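
The stderr trace shows how status decides whether an apiserver is up: find the newest kube-apiserver process, read the freezer state of its cgroup (THAWED vs. FROZEN distinguishes running from paused), then probe /healthz through the HA virtual IP. Roughly, on the node (a sketch; the pid and container ID are specific to this run, and the trace uses Go's HTTP client rather than curl):

sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # newest matching pid, e.g. 1272
sudo egrep '^[0-9]+:freezer:' /proc/1272/cgroup   # locate the freezer cgroup
# cat the freezer.state file at the path printed above: THAWED means running
curl -k https://192.168.49.254:8443/healthz       # returns 200 "ok" when healthy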

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (26.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node start m02 --alsologtostderr -v 5
E1013 22:39:24.529889  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 node start m02 --alsologtostderr -v 5: (24.924375995s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5: (1.370555744s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.195075851s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.97s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 stop --alsologtostderr -v 5: (27.295004538s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 start --wait true --alsologtostderr -v 5
E1013 22:40:46.453854  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:40:59.783107  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 start --wait true --alsologtostderr -v 5: (1m32.504770479s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.97s)
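
The invariant this test asserts is that a full stop/start cycle preserves the node list. As a sketch with the same commands:

BEFORE=$(out/minikube-linux-arm64 -p ha-548764 node list --alsologtostderr -v 5)
out/minikube-linux-arm64 -p ha-548764 stop --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-548764 start --wait true --alsologtostderr -v 5
AFTER=$(out/minikube-linux-arm64 -p ha-548764 node list --alsologtostderr -v 5)
[ "$BEFORE" = "$AFTER" ] && echo "node list preserved across restart"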

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.88s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 node delete m03 --alsologtostderr -v 5: (10.936046315s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.88s)
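
The go-template in the last step iterates every node's conditions and prints only the Ready status, so an all-"True" column means every node left after the delete is healthy. Runnable on its own (the harness adds an extra layer of quoting around the template):

kubectl --context ha-548764 get nodes \
  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'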

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.99s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 stop --alsologtostderr -v 5: (35.874245724s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5: exit status 7 (119.344908ms)

-- stdout --
	ha-548764
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-548764-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 22:42:20.311904  485716 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:42:20.312338  485716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:42:20.312352  485716 out.go:374] Setting ErrFile to fd 2...
	I1013 22:42:20.312357  485716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:42:20.312699  485716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:42:20.312935  485716 out.go:368] Setting JSON to false
	I1013 22:42:20.312984  485716 mustload.go:65] Loading cluster: ha-548764
	I1013 22:42:20.313074  485716 notify.go:220] Checking for updates...
	I1013 22:42:20.313451  485716 config.go:182] Loaded profile config "ha-548764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:42:20.313472  485716 status.go:174] checking status of ha-548764 ...
	I1013 22:42:20.314063  485716 cli_runner.go:164] Run: docker container inspect ha-548764 --format={{.State.Status}}
	I1013 22:42:20.333215  485716 status.go:371] ha-548764 host status = "Stopped" (err=<nil>)
	I1013 22:42:20.333240  485716 status.go:384] host is not running, skipping remaining checks
	I1013 22:42:20.333247  485716 status.go:176] ha-548764 status: &{Name:ha-548764 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:42:20.333273  485716 status.go:174] checking status of ha-548764-m02 ...
	I1013 22:42:20.333593  485716 cli_runner.go:164] Run: docker container inspect ha-548764-m02 --format={{.State.Status}}
	I1013 22:42:20.352045  485716 status.go:371] ha-548764-m02 host status = "Stopped" (err=<nil>)
	I1013 22:42:20.352071  485716 status.go:384] host is not running, skipping remaining checks
	I1013 22:42:20.352077  485716 status.go:176] ha-548764-m02 status: &{Name:ha-548764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:42:20.352097  485716 status.go:174] checking status of ha-548764-m04 ...
	I1013 22:42:20.352411  485716 cli_runner.go:164] Run: docker container inspect ha-548764-m04 --format={{.State.Status}}
	I1013 22:42:20.374515  485716 status.go:371] ha-548764-m04 host status = "Stopped" (err=<nil>)
	I1013 22:42:20.374538  485716 status.go:384] host is not running, skipping remaining checks
	I1013 22:42:20.374545  485716 status.go:176] ha-548764-m04 status: &{Name:ha-548764-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (90.07s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1013 22:43:02.591262  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:43:30.296340  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m29.085238177s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.07s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.96s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 node add --control-plane --alsologtostderr -v 5: (1m17.838236109s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-548764 status --alsologtostderr -v 5: (1.118493731s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.071669488s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)
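
The HAppy* checks parse profile list --output json rather than human output. A hedged jq sketch of the kind of assertion they make, assuming minikube's profile JSON shape (a top-level "valid" array whose entries carry Config.Nodes with a ControlPlane flag; field names are from memory, not from this report):

out/minikube-linux-arm64 profile list --output json \
  | jq '.valid[] | select(.Name=="ha-548764")
        | [.Config.Nodes[] | select(.ControlPlane)] | length'   # expect 3 after the add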

                                                
                                    
TestJSONOutput/start/Command (78.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-381214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1013 22:45:59.783641  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-381214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.169455306s)
--- PASS: TestJSONOutput/start/Command (78.17s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-381214 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-381214 --output=json --user=testUser: (5.8567539s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-056567 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-056567 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.087951ms)

-- stdout --
	{"specversion":"1.0","id":"dd738489-8f0a-4435-95db-1b199ff8ab7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-056567] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2228ccb-c336-441f-ab12-7558da813c8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"77d07f99-ba6c-468a-8a1b-0b9dc8694aca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab5bc502-68f9-4c97-a409-c89de2dcd5eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig"}}
	{"specversion":"1.0","id":"7d7786b6-9a94-4568-b898-f675a5454fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube"}}
	{"specversion":"1.0","id":"3e328f03-c951-4482-a590-fa4884c04651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8026ca42-e49b-4154-ae89-45a2045de227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80d7cce9-4f8c-4c1e-b54a-f2a65a4e2c7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-056567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-056567
--- PASS: TestErrorJSONOutput (0.24s)
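
Each line emitted under --output=json is a CloudEvents envelope, as the dump above shows (type io.k8s.sigs.minikube.step/info/error, payload under data). A small filter that surfaces only error events, reusing this run's failing invocation:

out/minikube-linux-arm64 start -p json-output-error-056567 --memory=3072 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type=="io.k8s.sigs.minikube.error")
           | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64 (exit 56)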

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-004095 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-004095 --network=: (41.880392016s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-004095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-004095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-004095: (2.172615928s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.08s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (39.85s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-965769 --network=bridge
E1013 22:48:02.591250  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-965769 --network=bridge: (37.771231311s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-965769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-965769
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-965769: (2.040912843s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.85s)

                                                
                                    
TestKicExistingNetwork (36.8s)

=== RUN   TestKicExistingNetwork
I1013 22:48:17.131856  430652 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1013 22:48:17.150478  430652 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1013 22:48:17.150575  430652 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1013 22:48:17.150595  430652 cli_runner.go:164] Run: docker network inspect existing-network
W1013 22:48:17.167999  430652 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1013 22:48:17.168030  430652 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1013 22:48:17.168044  430652 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1013 22:48:17.168162  430652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 22:48:17.185222  430652 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-daf8f67114ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:2a:b3:49:6d:63} reservation:<nil>}
I1013 22:48:17.185531  430652 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001dbcd40}
I1013 22:48:17.185551  430652 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1013 22:48:17.185605  430652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1013 22:48:17.239185  430652 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-209604 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-209604 --network=existing-network: (34.568500149s)
helpers_test.go:175: Cleaning up "existing-network-209604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-209604
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-209604: (2.090057517s)
I1013 22:48:53.915624  430652 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.80s)
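
This test pre-creates a docker network and then points minikube at it with --network. The equivalent by hand, using the same subnet and bridge options the log shows (minikube's created_by/name labels omitted):

docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
out/minikube-linux-arm64 start -p existing-network-209604 --network=existing-network
docker network ls --format {{.Name}}   # existing-network should be listed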

                                                
                                    
TestKicCustomSubnet (34.75s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-766144 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-766144 --subnet=192.168.60.0/24: (32.464309606s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-766144 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-766144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-766144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-766144: (2.262240882s)
--- PASS: TestKicCustomSubnet (34.75s)

                                                
                                    
TestKicStaticIP (36.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-747167 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-747167 --static-ip=192.168.200.200: (34.52341348s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-747167 ip
helpers_test.go:175: Cleaning up "static-ip-747167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-747167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-747167: (2.147619573s)
--- PASS: TestKicStaticIP (36.83s)
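
Together with TestKicCustomSubnet above, this exercises the two kic addressing knobs: --subnet picks the docker network CIDR, while --static-ip pins the node address. The commands and their verification steps, as run here:

out/minikube-linux-arm64 start -p custom-subnet-766144 --subnet=192.168.60.0/24
docker network inspect custom-subnet-766144 --format "{{(index .IPAM.Config 0).Subnet}}"
out/minikube-linux-arm64 start -p static-ip-747167 --static-ip=192.168.200.200
out/minikube-linux-arm64 -p static-ip-747167 ip   # prints 192.168.200.200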

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (75.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-807637 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-807637 --driver=docker  --container-runtime=crio: (34.147207754s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-810000 --driver=docker  --container-runtime=crio
E1013 22:50:59.786974  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-810000 --driver=docker  --container-runtime=crio: (36.178412141s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-807637
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-810000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-810000
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-810000: (2.148105341s)
helpers_test.go:175: Cleaning up "first-807637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-807637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-807637: (2.032081976s)
--- PASS: TestMinikubeProfile (75.92s)
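
A condensed sketch of the profile round-trip exercised above (profile names illustrative):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first         # make "first" the active profile
    minikube profile list -ojson   # machine-readable view of both profiles
    minikube delete -p second
    minikube delete -p first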

TestMountStart/serial/StartWithMountFirst (9.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-316749 --memory=3072 --mount-string /tmp/TestMountStartserial2911708504/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-316749 --memory=3072 --mount-string /tmp/TestMountStartserial2911708504/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.691109725s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.69s)
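
The mount flags can be exercised outside the test harness too; a sketch with an illustrative profile name and host path, mirroring the flags logged above:

    minikube start -p mount-demo --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the host directory should be listable from inside the guest
    minikube -p mount-demo ssh -- ls /minikube-host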

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-316749 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-318702 --memory=3072 --mount-string /tmp/TestMountStartserial2911708504/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-318702 --memory=3072 --mount-string /tmp/TestMountStartserial2911708504/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.488598768s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.49s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-316749 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-316749 --alsologtostderr -v=5: (1.707235004s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-318702
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-318702: (1.299965564s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-318702
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-318702: (6.941783616s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (132.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819893 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1013 22:53:02.592197  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:54:02.855706  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819893 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m12.403310308s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.94s)
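
A minimal sketch of the two-node bring-up (profile name illustrative):

    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
    minikube -p multinode-demo status   # expect both the control plane and the worker Running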

TestMultiNode/serial/DeployApp2Nodes (5.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-819893 -- rollout status deployment/busybox: (3.098018453s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-88slj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-zqcpt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-88slj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-zqcpt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-88slj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-zqcpt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)
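
The DNS assertions boil down to three lookups per pod; a sketch using the illustrative multinode-demo profile from above, where <pod> stands in for one of the busybox pod names returned by "get pods":

    minikube kubectl -p multinode-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    # repeat for kubernetes.io and kubernetes.default, as the test does
    minikube kubectl -p multinode-demo -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local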

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-88slj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-88slj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-zqcpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-819893 -- exec busybox-7b57f96db7-zqcpt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

TestMultiNode/serial/AddNode (59.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-819893 -v=5 --alsologtostderr
E1013 22:54:25.658267  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-819893 -v=5 --alsologtostderr: (58.760973281s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.44s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-819893 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp testdata/cp-test.txt multinode-819893:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929076958/001/cp-test_multinode-819893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893:/home/docker/cp-test.txt multinode-819893-m02:/home/docker/cp-test_multinode-819893_multinode-819893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test_multinode-819893_multinode-819893-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893:/home/docker/cp-test.txt multinode-819893-m03:/home/docker/cp-test_multinode-819893_multinode-819893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test_multinode-819893_multinode-819893-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp testdata/cp-test.txt multinode-819893-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929076958/001/cp-test_multinode-819893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m02:/home/docker/cp-test.txt multinode-819893:/home/docker/cp-test_multinode-819893-m02_multinode-819893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test_multinode-819893-m02_multinode-819893.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m02:/home/docker/cp-test.txt multinode-819893-m03:/home/docker/cp-test_multinode-819893-m02_multinode-819893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test_multinode-819893-m02_multinode-819893-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp testdata/cp-test.txt multinode-819893-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile929076958/001/cp-test_multinode-819893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m03:/home/docker/cp-test.txt multinode-819893:/home/docker/cp-test_multinode-819893-m03_multinode-819893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893 "sudo cat /home/docker/cp-test_multinode-819893-m03_multinode-819893.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 cp multinode-819893-m03:/home/docker/cp-test.txt multinode-819893-m02:/home/docker/cp-test_multinode-819893-m03_multinode-819893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 ssh -n multinode-819893-m02 "sudo cat /home/docker/cp-test_multinode-819893-m03_multinode-819893-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.40s)
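
The cp subcommand is exercised in all three directions above; condensed, with the illustrative multinode-demo profile:

    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt         # host -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt             # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"           # verify over ssh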

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-819893 node stop m03: (1.328829603s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819893 status: exit status 7 (542.894714ms)
-- stdout --
	multinode-819893
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-819893-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-819893-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr: exit status 7 (542.558566ms)
-- stdout --
	multinode-819893
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-819893-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-819893-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 22:55:24.064033  536118 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:55:24.064314  536118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:55:24.064342  536118 out.go:374] Setting ErrFile to fd 2...
	I1013 22:55:24.064378  536118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:55:24.064775  536118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:55:24.065051  536118 out.go:368] Setting JSON to false
	I1013 22:55:24.065123  536118 mustload.go:65] Loading cluster: multinode-819893
	I1013 22:55:24.065210  536118 notify.go:220] Checking for updates...
	I1013 22:55:24.065714  536118 config.go:182] Loaded profile config "multinode-819893": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:55:24.065754  536118 status.go:174] checking status of multinode-819893 ...
	I1013 22:55:24.066494  536118 cli_runner.go:164] Run: docker container inspect multinode-819893 --format={{.State.Status}}
	I1013 22:55:24.086728  536118 status.go:371] multinode-819893 host status = "Running" (err=<nil>)
	I1013 22:55:24.086751  536118 host.go:66] Checking if "multinode-819893" exists ...
	I1013 22:55:24.087049  536118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-819893
	I1013 22:55:24.111216  536118 host.go:66] Checking if "multinode-819893" exists ...
	I1013 22:55:24.111571  536118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:55:24.111667  536118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-819893
	I1013 22:55:24.129737  536118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/multinode-819893/id_rsa Username:docker}
	I1013 22:55:24.232917  536118 ssh_runner.go:195] Run: systemctl --version
	I1013 22:55:24.240349  536118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:55:24.254513  536118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:55:24.313157  536118 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 22:55:24.302866714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:55:24.313755  536118 kubeconfig.go:125] found "multinode-819893" server: "https://192.168.67.2:8443"
	I1013 22:55:24.313795  536118 api_server.go:166] Checking apiserver status ...
	I1013 22:55:24.313843  536118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:55:24.326028  536118 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	I1013 22:55:24.334773  536118 api_server.go:182] apiserver freezer: "4:freezer:/docker/63faf7fc63de2fa28168427c0682645d58356a9becac36ad8efbfd64e815ea97/crio/crio-47004f321483c2d51d7f2d75f0e867dbfcad52b9ae2e4f8646d6538096a18e9b"
	I1013 22:55:24.334847  536118 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/63faf7fc63de2fa28168427c0682645d58356a9becac36ad8efbfd64e815ea97/crio/crio-47004f321483c2d51d7f2d75f0e867dbfcad52b9ae2e4f8646d6538096a18e9b/freezer.state
	I1013 22:55:24.343061  536118 api_server.go:204] freezer state: "THAWED"
	I1013 22:55:24.343217  536118 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1013 22:55:24.351617  536118 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1013 22:55:24.351647  536118 status.go:463] multinode-819893 apiserver status = Running (err=<nil>)
	I1013 22:55:24.351658  536118 status.go:176] multinode-819893 status: &{Name:multinode-819893 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:55:24.351701  536118 status.go:174] checking status of multinode-819893-m02 ...
	I1013 22:55:24.352015  536118 cli_runner.go:164] Run: docker container inspect multinode-819893-m02 --format={{.State.Status}}
	I1013 22:55:24.369893  536118 status.go:371] multinode-819893-m02 host status = "Running" (err=<nil>)
	I1013 22:55:24.369920  536118 host.go:66] Checking if "multinode-819893-m02" exists ...
	I1013 22:55:24.370238  536118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-819893-m02
	I1013 22:55:24.388368  536118 host.go:66] Checking if "multinode-819893-m02" exists ...
	I1013 22:55:24.388676  536118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:55:24.388742  536118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-819893-m02
	I1013 22:55:24.411436  536118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/21724-428797/.minikube/machines/multinode-819893-m02/id_rsa Username:docker}
	I1013 22:55:24.512370  536118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:55:24.526173  536118 status.go:176] multinode-819893-m02 status: &{Name:multinode-819893-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:55:24.526209  536118 status.go:174] checking status of multinode-819893-m03 ...
	I1013 22:55:24.526595  536118 cli_runner.go:164] Run: docker container inspect multinode-819893-m03 --format={{.State.Status}}
	I1013 22:55:24.543925  536118 status.go:371] multinode-819893-m03 host status = "Stopped" (err=<nil>)
	I1013 22:55:24.543949  536118 status.go:384] host is not running, skipping remaining checks
	I1013 22:55:24.543956  536118 status.go:176] multinode-819893-m03 status: &{Name:multinode-819893-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
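
Stopping one node degrades status to exit code 7 while the rest of the cluster keeps running, as the output above shows; a sketch with the illustrative multinode-demo profile:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status    # exits 7; m03 reports host/kubelet Stopped
    minikube -p multinode-demo node start m03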

TestMultiNode/serial/StartAfterStop (8.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-819893 node start m03 -v=5 --alsologtostderr: (7.232291488s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.04s)

TestMultiNode/serial/RestartKeepsNodes (72.61s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819893
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-819893
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-819893: (25.127837493s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819893 --wait=true -v=5 --alsologtostderr
E1013 22:55:59.783532  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819893 --wait=true -v=5 --alsologtostderr: (47.316479084s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819893
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.61s)

TestMultiNode/serial/DeleteNode (5.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-819893 node delete m03: (5.093793098s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.79s)

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-819893 stop: (23.853346264s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819893 status: exit status 7 (97.52393ms)
-- stdout --
	multinode-819893
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-819893-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr: exit status 7 (112.06021ms)
-- stdout --
	multinode-819893
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-819893-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 22:57:14.976084  543867 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:57:14.976263  543867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:57:14.976293  543867 out.go:374] Setting ErrFile to fd 2...
	I1013 22:57:14.976316  543867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:57:14.976961  543867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 22:57:14.977392  543867 out.go:368] Setting JSON to false
	I1013 22:57:14.977449  543867 mustload.go:65] Loading cluster: multinode-819893
	I1013 22:57:14.977552  543867 notify.go:220] Checking for updates...
	I1013 22:57:14.977928  543867 config.go:182] Loaded profile config "multinode-819893": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:57:14.977949  543867 status.go:174] checking status of multinode-819893 ...
	I1013 22:57:14.978925  543867 cli_runner.go:164] Run: docker container inspect multinode-819893 --format={{.State.Status}}
	I1013 22:57:14.997791  543867 status.go:371] multinode-819893 host status = "Stopped" (err=<nil>)
	I1013 22:57:14.997818  543867 status.go:384] host is not running, skipping remaining checks
	I1013 22:57:14.997825  543867 status.go:176] multinode-819893 status: &{Name:multinode-819893 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:57:14.997861  543867 status.go:174] checking status of multinode-819893-m02 ...
	I1013 22:57:14.998170  543867 cli_runner.go:164] Run: docker container inspect multinode-819893-m02 --format={{.State.Status}}
	I1013 22:57:15.040953  543867 status.go:371] multinode-819893-m02 host status = "Stopped" (err=<nil>)
	I1013 22:57:15.041042  543867 status.go:384] host is not running, skipping remaining checks
	I1013 22:57:15.041065  543867 status.go:176] multinode-819893-m02 status: &{Name:multinode-819893-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

TestMultiNode/serial/RestartMultiNode (52.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819893 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1013 22:58:02.591452  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819893 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.965881799s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-819893 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.66s)

TestMultiNode/serial/ValidateNameConflict (40.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-819893
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819893-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-819893-m02 --driver=docker  --container-runtime=crio: exit status 14 (93.214467ms)
-- stdout --
	* [multinode-819893-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-819893-m02' is duplicated with machine name 'multinode-819893-m02' in profile 'multinode-819893'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-819893-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-819893-m03 --driver=docker  --container-runtime=crio: (37.796312293s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-819893
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-819893: exit status 80 (396.460762ms)
-- stdout --
	* Adding node m03 to cluster multinode-819893 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-819893-m03 already exists in multinode-819893-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-819893-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-819893-m03: (2.08242146s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.45s)

TestPreload (137.13s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-652215 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-652215 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.866163289s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-652215 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-652215 image pull gcr.io/k8s-minikube/busybox: (2.364602576s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-652215
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-652215: (5.932065334s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-652215 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1013 23:00:59.783041  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-652215 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m3.249196132s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-652215 image list
helpers_test.go:175: Cleaning up "test-preload-652215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-652215
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-652215: (2.486645402s)
--- PASS: TestPreload (137.13s)
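
The preload scenario in shell form (profile name illustrative; the point is that an image pulled with --preload=false survives the stop/start cycle):

    minikube start -p preload-demo --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still be listed
    minikube delete -p preload-demo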

TestScheduledStopUnix (108.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-981969 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-981969 --memory=3072 --driver=docker  --container-runtime=crio: (32.493117977s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981969 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-981969 -n scheduled-stop-981969
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981969 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 23:01:42.694736  430652 retry.go:31] will retry after 79.635µs: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.695845  430652 retry.go:31] will retry after 182.294µs: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.696110  430652 retry.go:31] will retry after 308.13µs: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.697230  430652 retry.go:31] will retry after 341.328µs: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.698346  430652 retry.go:31] will retry after 630.946µs: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.699468  430652 retry.go:31] will retry after 1.069119ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.701617  430652 retry.go:31] will retry after 1.59041ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.703810  430652 retry.go:31] will retry after 2.426052ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.707001  430652 retry.go:31] will retry after 1.839474ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.709190  430652 retry.go:31] will retry after 4.876209ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.714398  430652 retry.go:31] will retry after 3.084402ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.718624  430652 retry.go:31] will retry after 12.283888ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.731596  430652 retry.go:31] will retry after 7.96592ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.740236  430652 retry.go:31] will retry after 24.668112ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
I1013 23:01:42.765465  430652 retry.go:31] will retry after 30.74269ms: open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/scheduled-stop-981969/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981969 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981969 -n scheduled-stop-981969
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-981969
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981969 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-981969
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-981969: exit status 7 (73.028283ms)
-- stdout --
	scheduled-stop-981969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981969 -n scheduled-stop-981969
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981969 -n scheduled-stop-981969: exit status 7 (71.927536ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-981969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-981969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-981969: (4.78461042s)
--- PASS: TestScheduledStopUnix (108.95s)
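
The scheduled-stop surface in shell form (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p sched-demo
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # re-arm; once it fires, status exits 7 with everything Stopped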

TestInsufficientStorage (11.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-438038 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1013 23:03:02.591932  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-438038 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.819796448s)
-- stdout --
	{"specversion":"1.0","id":"a8823f2a-c7cd-4638-8ade-9a8a3c01ffcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-438038] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"98d44395-3cb5-4038-a050-dce0d397bbc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"acbaad36-72a4-4195-a85e-063edf017d9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9c29bc1-0e4f-4042-8ccd-d854cb9b91bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig"}}
	{"specversion":"1.0","id":"d12c20c5-69b6-4368-abef-5a5fba571772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube"}}
	{"specversion":"1.0","id":"6e2dbebc-d8c4-4336-a334-32eb937b8fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7f004e82-cfb6-4c8d-a9a0-b006f293b226","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8a2b13a2-a259-44c4-b8d8-b1da39cbeebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c8a18da0-74f7-4e0b-a77d-f7dfca5f0145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"364c9e6b-ea1d-42d0-a801-88a6cf92d958","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ae460ce-423c-4aa3-8153-4cad84fc292a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"499dc7c2-404c-4699-989b-d7d500497671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-438038\" primary control-plane node in \"insufficient-storage-438038\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1764f7c6-f5bc-4ae0-af29-dd407de0ed85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760363564-21724 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd9d8c07-3959-4d19-b4c7-0db78e4a96fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b40ca0ae-88ed-4c7e-9bc2-19ff14fa2a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-438038 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-438038 --output=json --layout=cluster: exit status 7 (296.794962ms)
-- stdout --
	{"Name":"insufficient-storage-438038","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-438038","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1013 23:03:07.658946  560118 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-438038" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig

                                                
                                                
** /stderr **
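The `status --output=json --layout=cluster` payload has the shape shown in the stdout block above. Here is a Go struct mirroring exactly those fields (the HTTP-style codes 507/500/405 encode InsufficientStorage/Error/Stopped; this sketch is inferred from the output, not copied from minikube's source):

	// Mirror of the cluster-layout status JSON printed above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name          string               `json:"Name"`
		StatusCode    int                  `json:"StatusCode"`
		StatusName    string               `json:"StatusName"`
		StatusDetail  string               `json:"StatusDetail"`
		BinaryVersion string               `json:"BinaryVersion"`
		Components    map[string]component `json:"Components"`
		Nodes         []struct {
			Name       string               `json:"Name"`
			StatusCode int                  `json:"StatusCode"`
			StatusName string               `json:"StatusName"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		raw := []byte(`{"Name":"insufficient-storage-438038","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`)
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		fmt.Println(st.Name, st.StatusCode, st.StatusName) // 507 => insufficient storage
	}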
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-438038 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-438038 --output=json --layout=cluster: exit status 7 (302.787183ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-438038","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-438038","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 23:03:07.960925  560185 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-438038" does not appear in /home/jenkins/minikube-integration/21724-428797/kubeconfig
	E1013 23:03:07.971446  560185 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/insufficient-storage-438038/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-438038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-438038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-438038: (1.960585763s)
--- PASS: TestInsufficientStorage (11.38s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (53.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.146275334 start -p running-upgrade-276330 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.146275334 start -p running-upgrade-276330 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.580927338s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-276330 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-276330 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.459339131s)
helpers_test.go:175: Cleaning up "running-upgrade-276330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-276330
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-276330: (2.090426016s)
--- PASS: TestRunningBinaryUpgrade (53.88s)

                                                
                                    
x
+
TestKubernetesUpgrade (364.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.668412157s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-211312
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-211312: (1.484782183s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-211312 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-211312 status --format={{.Host}}: exit status 7 (102.874313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.237142535s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-211312 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (157.181413ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-211312] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-211312
	    minikube start -p kubernetes-upgrade-211312 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2113122 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-211312 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
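Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is minikube's guard against in-place downgrades. A minimal sketch of such a guard, assuming a semver comparison of the requested version against the cluster's current one (illustrative, not minikube's actual implementation):

	// Refuse in-place Kubernetes downgrades, mirroring the error above.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver" // versions must carry the "v" prefix
	)

	func checkDowngrade(current, requested string) error {
		if semver.Compare(requested, current) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
		}
		return nil
	}

	func main() {
		if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
			os.Exit(106) // exit code observed above
		}
	}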
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-211312 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.533685211s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-211312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-211312
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-211312: (2.587934522s)
--- PASS: TestKubernetesUpgrade (364.92s)

                                                
                                    
x
+
TestMissingContainerUpgrade (116.64s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3233019428 start -p missing-upgrade-354983 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3233019428 start -p missing-upgrade-354983 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.859628753s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-354983
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-354983
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-354983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-354983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.118584163s)
helpers_test.go:175: Cleaning up "missing-upgrade-354983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-354983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-354983: (2.549033062s)
--- PASS: TestMissingContainerUpgrade (116.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (98.507126ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-762540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
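The exit-14 MK_USAGE failure above is a mutually-exclusive-flag check: --kubernetes-version makes no sense together with --no-kubernetes. A sketch of that kind of validation with the standard flag package (hypothetical parser, not minikube's source):

	// Reject --kubernetes-version combined with --no-kubernetes.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // exit code observed above
		}
		fmt.Println("flags ok")
	}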

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762540 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.26810898s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-762540 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (20.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (17.411523166s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-762540 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-762540 status -o json: exit status 2 (445.485443ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-762540","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-762540
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-762540: (2.627319068s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762540 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.721469014s)
--- PASS: TestNoKubernetes/serial/Start (5.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-762540 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-762540 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.659339ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
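The check passes precisely because the ssh'd command exits non-zero: systemctl is-active returns status 3 when the unit is inactive. Extracting such an exit code in Go looks like this (run locally here rather than through minikube ssh; same systemctl arguments as the test):

	// Interpret the exit code of `systemctl is-active --quiet service kubelet`.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &ee):
			fmt.Println("kubelet not active, exit code:", ee.ExitCode()) // 3 = inactive
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}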

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-762540
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-762540: (1.300414816s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762540 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762540 --driver=docker  --container-runtime=crio: (6.950149311s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-762540 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-762540 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.63245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (60.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3740206871 start -p stopped-upgrade-633601 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3740206871 start -p stopped-upgrade-633601 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.225305556s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3740206871 -p stopped-upgrade-633601 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3740206871 -p stopped-upgrade-633601 stop: (1.249528939s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-633601 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1013 23:05:59.782882  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-633601 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.302938403s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-633601
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-633601: (1.157472489s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
x
+
TestPause/serial/Start (80.6s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-836584 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1013 23:08:02.591252  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-836584 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.598012592s)
--- PASS: TestPause/serial/Start (80.60s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (32.14s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-836584 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-836584 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.122882085s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-557095 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-557095 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (186.621306ms)

                                                
                                                
-- stdout --
	* [false-557095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 23:09:50.805555  597088 out.go:360] Setting OutFile to fd 1 ...
	I1013 23:09:50.805776  597088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:09:50.805803  597088 out.go:374] Setting ErrFile to fd 2...
	I1013 23:09:50.805823  597088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 23:09:50.806116  597088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-428797/.minikube/bin
	I1013 23:09:50.806583  597088 out.go:368] Setting JSON to false
	I1013 23:09:50.807561  597088 start.go:131] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10327,"bootTime":1760386664,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1013 23:09:50.807667  597088 start.go:141] virtualization:  
	I1013 23:09:50.811504  597088 out.go:179] * [false-557095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 23:09:50.814892  597088 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 23:09:50.814970  597088 notify.go:220] Checking for updates...
	I1013 23:09:50.818589  597088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 23:09:50.821684  597088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-428797/kubeconfig
	I1013 23:09:50.824672  597088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-428797/.minikube
	I1013 23:09:50.827642  597088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 23:09:50.830539  597088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 23:09:50.833862  597088 config.go:182] Loaded profile config "kubernetes-upgrade-211312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 23:09:50.834006  597088 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 23:09:50.864084  597088 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 23:09:50.864218  597088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 23:09:50.922871  597088 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 23:09:50.913607307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 23:09:50.922985  597088 docker.go:318] overlay module found
	I1013 23:09:50.928579  597088 out.go:179] * Using the docker driver based on user configuration
	I1013 23:09:50.931471  597088 start.go:305] selected driver: docker
	I1013 23:09:50.931493  597088 start.go:925] validating driver "docker" against <nil>
	I1013 23:09:50.931509  597088 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 23:09:50.935164  597088 out.go:203] 
	W1013 23:09:50.938016  597088 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1013 23:09:50.940980  597088 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-557095 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-557095" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 23:05:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-211312
contexts:
- context:
    cluster: kubernetes-upgrade-211312
    user: kubernetes-upgrade-211312
  name: kubernetes-upgrade-211312
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-211312
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.crt
    client-key: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.key
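Note that current-context is empty in this kubeconfig, so loading it with client-go needs an explicit context override. A sketch under that assumption (file path and context name taken from the config above):

	// Load the kubeconfig shown above, forcing the only defined context.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := &clientcmd.ClientConfigLoadingRules{
			ExplicitPath: "/home/jenkins/minikube-integration/21724-428797/kubeconfig",
		}
		overrides := &clientcmd.ConfigOverrides{
			CurrentContext: "kubernetes-upgrade-211312", // file has current-context: ""
		}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("server:", cfg.Host, "clientset ready:", client != nil)
	}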

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-557095

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557095"

                                                
                                                
----------------------- debugLogs end: false-557095 [took: 5.265773587s] --------------------------------
helpers_test.go:175: Cleaning up "false-557095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-557095
--- PASS: TestNetworkPlugins/group/false (5.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (69.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m9.965706708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-670275 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4c719ad8-a8c2-4e6e-8edd-4d24c1c9eba0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.010620125s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-670275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)
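DeployApp's pass condition is the label wait shown above: a pod matching integration-test=busybox must reach Running within the 8m0s window. A client-go sketch of that polling loop (illustrative; the suite's actual helper lives in helpers_test.go):

	// Poll until a pod labelled integration-test=busybox is Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForBusybox(ctx context.Context, client kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 8*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("default").List(ctx,
					metav1.ListOptions{LabelSelector: "integration-test=busybox"})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForBusybox(context.Background(), client); err != nil {
			panic(err)
		}
		fmt.Println("busybox is Running")
	}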

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-670275 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-670275 --alsologtostderr -v=3: (12.150087654s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275: exit status 7 (177.240887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-670275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1013 23:13:02.591295  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-670275 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.600770751s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670275 -n old-k8s-version-670275
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gg5tp" [86728516-6908-4c5c-91e7-e39eb9a82389] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gg5tp" [86728516-6908-4c5c-91e7-e39eb9a82389] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003839701s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gg5tp" [86728516-6908-4c5c-91e7-e39eb9a82389] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003451629s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-670275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670275 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
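
The VerifyKubernetesImages step lists the images the runtime has pulled and reports anything outside minikube's expected set (here kindest/kindnetd and gcr.io/k8s-minikube/busybox). A sketch of that filter, assuming "image list --format=json" emits a flat JSON array of repo:tag strings and using an illustrative allowlist rather than the test's real one:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "old-k8s-version-670275", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []string // assumption: flat array of image references
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Hypothetical allowlist: treat the core registry as "minikube images".
		if !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}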

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (87.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.94006458s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.19s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.189058448s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-985461 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c064996-48ad-4fe6-af64-76040f212388] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9c064996-48ad-4fe6-af64-76040f212388] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003773896s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-985461 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
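
DeployApp applies testdata/busybox.yaml, waits for the pod, then reads the container's open-file limit with "ulimit -n". The same sequence as a standalone sketch ("kubectl wait" is a stand-in for the suite's own pod polling):

package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl with the given arguments and returns its output.
func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v: %s", args, err, out))
	}
	return string(out)
}

func main() {
	kubectx := "no-preload-985461"
	run("--context", kubectx, "create", "-f", "testdata/busybox.yaml")
	run("--context", kubectx, "wait", "--for=condition=Ready",
		"pod/busybox", "--timeout=8m")
	// Matches the exec step logged above; prints the container's fd limit.
	fmt.Print(run("--context", kubectx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n"))
}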

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-985461 --alsologtostderr -v=3
E1013 23:15:59.783213  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-985461 --alsologtostderr -v=3: (12.047357396s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461: exit status 7 (72.95678ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-985461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.5s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-985461 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.019870241s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985461 -n no-preload-985461
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-505482 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [86067663-4b7a-4a32-b34b-a4256970748a] Pending
helpers_test.go:352: "busybox" [86067663-4b7a-4a32-b34b-a4256970748a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [86067663-4b7a-4a32-b34b-a4256970748a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00472507s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-505482 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-505482 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-505482 --alsologtostderr -v=3: (12.272302227s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482: exit status 7 (103.654555ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-505482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-505482 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.922461086s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-505482 -n embed-certs-505482
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xr9sp" [6b01b93b-d66c-4fcd-9efa-efc7d955f5b3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534301s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xr9sp" [6b01b93b-d66c-4fcd-9efa-efc7d955f5b3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003791184s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-985461 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-985461 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1013 23:17:36.281848  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.288851  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.300341  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.321652  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.362992  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.445315  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.606701  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:36.928497  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:37.570774  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.048017608s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6dnwb" [a22d237a-c2a5-46ab-805f-ae6fbea82083] Running
E1013 23:17:38.852234  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:17:41.414235  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003184263s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6dnwb" [a22d237a-c2a5-46ab-805f-ae6fbea82083] Running
E1013 23:17:46.536319  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003616707s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-505482 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-505482 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.47s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1013 23:18:02.591540  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/functional-544242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:18:17.258961  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.467394825s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-041709 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-041709 --alsologtostderr -v=3: (1.5406543s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709: exit status 7 (86.664669ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-041709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.49s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-041709 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (17.140302369s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-041709 -n newest-cni-041709
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3ac75256-6e21-451f-a3a2-d6c2cfb61938] Pending
helpers_test.go:352: "busybox" [3ac75256-6e21-451f-a3a2-d6c2cfb61938] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3ac75256-6e21-451f-a3a2-d6c2cfb61938] Running
E1013 23:18:58.220482  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003029809s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-033746 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-033746 --alsologtostderr -v=3: (13.899968408s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-041709 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.843393724s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746: exit status 7 (69.208067ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-033746 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-033746 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.078907186s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-033746 -n default-k8s-diff-port-033746
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (31.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gck5m" [c2f669b4-c484-405f-af7d-dc46ff376baf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 23:20:20.141816  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gck5m" [c2f669b4-c484-405f-af7d-dc46ff376baf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 31.01184739s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (31.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-557095 "pgrep -a kubelet"
I1013 23:20:38.798868  430652 config.go:182] Loaded profile config "auto-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
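
KubeletFlags only needs the kubelet command line: "pgrep -a" over "minikube ssh" prints "<pid> <full command>", so the flags the test asserts on are visible in a single line. A sketch of the same probe, using the exact command from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test runs; pgrep -a includes the full argument list,
	// making runtime flags (e.g. the crio endpoint) inspectable.
	out, err := exec.Command("out/minikube-linux-arm64",
		"ssh", "-p", "auto-557095", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}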

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kbmg5" [11684e82-7c82-4cb8-b54d-c7373cfc4b9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kbmg5" [11684e82-7c82-4cb8-b54d-c7373cfc4b9a] Running
E1013 23:20:44.936986  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:44.943346  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:44.954727  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:44.976106  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:45.019306  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:45.101718  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:20:45.264606  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003762953s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gck5m" [c2f669b4-c484-405f-af7d-dc46ff376baf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005374192s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-033746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-033746 image list --format=json
E1013 23:20:45.586475  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
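
The DNS, Localhost, and HairPin probes above all run inside the netcat deployment: nslookup checks cluster DNS, nc against localhost checks the pod itself, and nc against the "netcat" service name checks hairpin traffic (a pod reaching itself back through its own service). Condensed into one hedged sketch with the commands taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// probe executes a shell command inside the netcat deployment.
func probe(kubectx, cmd string) error {
	return exec.Command("kubectl", "--context", kubectx,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).Run()
}

func main() {
	kubectx := "auto-557095"
	for name, cmd := range map[string]string{
		"dns":       "nslookup kubernetes.default",
		"localhost": "nc -w 5 -i 5 -z localhost 8080",
		"hairpin":   "nc -w 5 -i 5 -z netcat 8080",
	} {
		fmt.Printf("%s: err=%v\n", name, probe(kubectx, cmd))
	}
}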

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (83.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1013 23:20:59.783188  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:21:05.434607  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.666304389s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.67s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1013 23:21:25.916299  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:22:06.878092  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.426690778s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.43s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-g7qfq" [91b09a34-ae54-41df-a6d0-a9b26010fe1f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-g7qfq" [91b09a34-ae54-41df-a6d0-a9b26010fe1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003780111s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-97j6q" [3d13d1e9-5554-4d7b-9a3c-1e3f6d0a61f7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003719742s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-557095 "pgrep -a kubelet"
I1013 23:22:24.533689  430652 config.go:182] Loaded profile config "calico-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nkmcr" [cbfb4ff2-2701-4260-b1e6-2ba938f369d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nkmcr" [cbfb4ff2-2701-4260-b1e6-2ba938f369d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00343997s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-557095 "pgrep -a kubelet"
I1013 23:22:26.674194  430652 config.go:182] Loaded profile config "kindnet-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cqr26" [a06732fb-ec74-422e-b543-558dc2e5e558] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cqr26" [a06732fb-ec74-422e-b543-558dc2e5e558] Running
E1013 23:22:36.282193  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/old-k8s-version-670275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003982473s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.196644403s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1013 23:23:28.800235  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.731991  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.738321  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.749686  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.771781  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.813294  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:49.895556  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:50.056919  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:50.378418  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:51.020577  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:52.302695  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:54.864409  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:23:59.985847  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:24:10.227227  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/default-k8s-diff-port-033746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m31.517890282s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.52s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-557095 "pgrep -a kubelet"
I1013 23:24:12.898727  430652 config.go:182] Loaded profile config "custom-flannel-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7l8x5" [00170682-6207-47bf-a289-fdb35feceee3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7l8x5" [00170682-6207-47bf-a289-fdb35feceee3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003399798s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-557095 "pgrep -a kubelet"
I1013 23:24:37.860639  430652 config.go:182] Loaded profile config "enable-default-cni-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dkbmf" [ae88c51a-13c3-4553-950a-cf5c5958525c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dkbmf" [ae88c51a-13c3-4553-950a-cf5c5958525c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004231621s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.41s)

TestNetworkPlugins/group/flannel/Start (67.18s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.18413085s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (77.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1013 23:25:39.054063  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.060521  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.071944  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.093332  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.134704  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.216101  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.377582  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:39.699219  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:40.341516  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:41.623782  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:44.185412  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:44.937475  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/no-preload-985461/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 23:25:49.307489  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-557095 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.412012425s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.41s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qbm9p" [5f0b0eae-8c06-47e1-acb8-4bfa86afa2c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00388812s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-557095 "pgrep -a kubelet"
I1013 23:25:59.494629  430652 config.go:182] Loaded profile config "flannel-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-557095 replace --force -f testdata/netcat-deployment.yaml
E1013 23:25:59.549149  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/auto-557095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vnrnw" [f3e2bbb6-75b5-4361-add5-f6776d1829a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1013 23:25:59.782703  430652 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/addons-801288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vnrnw" [f3e2bbb6-75b5-4361-add5-f6776d1829a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004391527s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-557095 "pgrep -a kubelet"
I1013 23:26:34.944110  430652 config.go:182] Loaded profile config "bridge-557095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-557095 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rm4lc" [b49d7040-f05f-4122-8fcd-1919f044d917] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rm4lc" [b49d7040-f05f-4122-8fcd-1919f044d917] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.019556309s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-557095 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-557095 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-659560 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-659560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-659560
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-320520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-320520
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (4.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-557095 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-557095

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-557095

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/hosts:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/resolv.conf:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-557095

>>> host: crictl pods:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: crictl containers:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> k8s: describe netcat deployment:
error: context "kubenet-557095" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-557095" does not exist

>>> k8s: netcat logs:
error: context "kubenet-557095" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-557095" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-557095" does not exist

>>> k8s: coredns logs:
error: context "kubenet-557095" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-557095" does not exist

>>> k8s: api server logs:
error: context "kubenet-557095" does not exist

>>> host: /etc/cni:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: ip a s:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: ip r s:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: iptables-save:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: iptables table nat:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-557095" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-557095" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-557095" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: kubelet daemon config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> k8s: kubelet logs:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 23:05:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-211312
contexts:
- context:
    cluster: kubernetes-upgrade-211312
    user: kubernetes-upgrade-211312
  name: kubernetes-upgrade-211312
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-211312
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.crt
    client-key: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-557095

>>> host: docker daemon status:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: docker daemon config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: docker system info:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: cri-docker daemon status:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: cri-docker daemon config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: cri-dockerd version:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: containerd daemon status:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: containerd daemon config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: containerd config dump:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: crio daemon status:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: crio daemon config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: /etc/crio:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"

>>> host: crio config:
* Profile "kubenet-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557095"
----------------------- debugLogs end: kubenet-557095 [took: 4.458631367s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-557095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-557095
--- SKIP: TestNetworkPlugins/group/kubenet (4.64s)

TestNetworkPlugins/group/cilium (5.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-557095 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557095

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-557095

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557095

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-557095

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-557095

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-557095

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-557095

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-557095

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-557095

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-557095

>>> host: /etc/nsswitch.conf:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/hosts:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/resolv.conf:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-557095

>>> host: crictl pods:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: crictl containers:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> k8s: describe netcat deployment:
error: context "cilium-557095" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-557095" does not exist

>>> k8s: netcat logs:
error: context "cilium-557095" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-557095" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-557095" does not exist

>>> k8s: coredns logs:
error: context "cilium-557095" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-557095" does not exist

>>> k8s: api server logs:
error: context "cilium-557095" does not exist

>>> host: /etc/cni:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: ip a s:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: ip r s:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: iptables-save:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: iptables table nat:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-557095

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-557095

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-557095" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-557095" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-557095

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-557095

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-557095" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-557095" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-557095" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-557095" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-557095" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: kubelet daemon config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> k8s: kubelet logs:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21724-428797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 13 Oct 2025 23:05:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-211312
contexts:
- context:
    cluster: kubernetes-upgrade-211312
    user: kubernetes-upgrade-211312
  name: kubernetes-upgrade-211312
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-211312
  user:
    client-certificate: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.crt
    client-key: /home/jenkins/minikube-integration/21724-428797/.minikube/profiles/kubernetes-upgrade-211312/client.key
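
This kubeconfig defines only a kubernetes-upgrade-211312 context, which is why every probe that targets cilium-557095 fails with the context-not-found errors above. A minimal sketch of that lookup using client-go (reading the path from $KUBECONFIG and the printed message are assumptions for illustration, not part of the test harness):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig dumped above; taking the path from $KUBECONFIG
	// is an assumption for illustration.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Only "kubernetes-upgrade-211312" is defined, so this lookup fails;
	// kubectl reports the same condition as "context was not found".
	if _, ok := cfg.Contexts["cilium-557095"]; !ok {
		fmt.Println("no such context: cilium-557095")
	}
}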
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-557095

>>> host: docker daemon status:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: docker daemon config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: docker system info:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: cri-docker daemon status:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: cri-docker daemon config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: cri-dockerd version:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: containerd daemon status:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: containerd daemon config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: containerd config dump:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: crio daemon status:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: crio daemon config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: /etc/crio:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

>>> host: crio config:
* Profile "cilium-557095" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557095"

----------------------- debugLogs end: cilium-557095 [took: 5.639909919s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-557095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-557095
--- SKIP: TestNetworkPlugins/group/cilium (5.87s)